Lecture 25 CIS 341: COMPILERS


Announcements
HW6: Dataflow Analysis. Due: Wednesday, April 26th. NOTE: See Piazza for an update. TL;DR: "simple" regalloc should not suffice; change >= to > in gradedtests.ml.
FINAL EXAM: Thursday, May 4th, noon to 2:00 p.m. Location: DRLB A4.

LOOPS AND DOMINATORS

Loops in Control-flow Graphs
Taking loops into account is important for optimizations. The 90/10 rule applies, so optimizing loop bodies pays off. Should we apply loop optimizations at the AST level or at a lower representation? Loop optimizations benefit from other IR-level optimizations and vice versa, so it is good to interleave them. However, loops may be hard to recognize at the quadruple / LLVM IR level, and there are many kinds of loops: while, do/while, for, and loops built from continue or goto. Problem: how do we identify loops in the control-flow graph?

Definition of a Loop
A loop is a set of nodes in the control-flow graph with one distinguished entry point called the header. Every node in the loop is reachable from the header and the header is reachable from every node, so a loop is a strongly connected component. No edges enter the loop except at the header. Loop nodes with outgoing edges that leave the loop are called loop exit nodes. [Figure: a loop with its header, loop nodes, and an exit node.]

Nested Loops
Control-flow graphs may contain many loops, and loops may contain other loops. The control tree depicts the nesting structure of the loops in the program. [Figure: a CFG with nested loops and its control tree.]

Control-flow Analysis
Goal: identify the loops and nesting structure of the CFG. Control-flow analysis is based on the idea of dominators: node A dominates node B if the only way to reach B from the start node is through A. An edge in the graph is a back edge if the target node dominates the source node. A loop contains at least one back edge.
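As a concrete aid (not from the slides), here is a minimal OCaml sketch of the back-edge test, assuming the Dom sets computed by the dataflow analysis two slides below, represented as an array of node sets indexed by source node:

  module IntSet = Set.Make (Int)

  (* An edge (n, h) is a back edge iff its target h dominates its source n. *)
  let back_edges (edges : (int * int) list) (dom : IntSet.t array) : (int * int) list =
    List.filter (fun (n, h) -> IntSet.mem h dom.(n)) edges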

Dominator Trees
Domination is transitive: if A dominates B and B dominates C, then A dominates C. Domination is anti-symmetric: if A dominates B and B dominates A, then A = B. Every flow graph has a dominator tree: the Hasse diagram of the dominates relation. [Figure: a control-flow graph over nodes 1-9,0 and its dominator tree.]

Dominator Dataflow Analysis
We can define Dom[n] as a forward dataflow analysis, using the framework we saw earlier: Dom[n] = out[n], where:
A node B is dominated by another node A if A dominates all of the predecessors of B: in[n] := ⋂ over n′ ∈ pred[n] of out[n′]
Every node dominates itself: out[n] := in[n] ∪ {n}
Formally: L = sets of nodes ordered by ⊆, T = {all nodes}, Fₙ(x) = x ∪ {n}, and the meet ⊓ is ∩. It is easy to show monotonicity and that Fₙ distributes over meet, so the algorithm terminates and computes the MOP solution.
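A hedged OCaml sketch of this iterative analysis; the representation is an assumption, not course code: nodes are ints 0..n-1, node 0 is the entry, and preds gives CFG predecessors.

  module IntSet = Set.Make (Int)

  let dominators ~(n : int) ~(preds : int -> int list) : IntSet.t array =
    let all = IntSet.of_list (List.init n Fun.id) in
    (* out[b] starts at T = {all nodes}; the entry node dominates only itself *)
    let out = Array.make n all in
    out.(0) <- IntSet.singleton 0;
    let changed = ref true in
    while !changed do
      changed := false;
      for b = 1 to n - 1 do
        (* in[b] := intersection of out over pred[b]; out[b] := in[b] ∪ {b} *)
        let in_b =
          List.fold_left (fun acc p -> IntSet.inter acc out.(p)) all (preds b)
        in
        let out_b = IntSet.add b in_b in
        if not (IntSet.equal out_b out.(b)) then (out.(b) <- out_b; changed := true)
      done
    done;
    out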

Improving the Algorithm
Dom[b] contains just the nodes along the path in the dominator tree from the root to b: e.g. Dom[8] = {1,2,4,8}, Dom[7] = {1,2,4,5,7}. There is a lot of sharing among the nodes, so a more efficient way to represent the Dom sets is to store the dominator tree itself: doms[b] = immediate dominator of b, e.g. doms[8] = 4, doms[7] = 5. To compute Dom[b], walk up through doms[b]. We also need to efficiently compute intersections of Dom[a] and Dom[b]: traverse up the tree, looking for the least common ancestor, e.g. Dom[8] ∩ Dom[7] = Dom[4]. See: "A Simple, Fast Dominance Algorithm" by Cooper, Harvey, and Kennedy.
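The least-common-ancestor walk at the heart of that paper can be sketched as follows (names are assumptions: idom is the doms array, and po.(v) is v's postorder number, with the entry numbered largest):

  (* Find the least common ancestor of a and b in the dominator tree. *)
  let intersect ~(idom : int array) ~(po : int array) (a : int) (b : int) : int =
    let f1 = ref a and f2 = ref b in
    while !f1 <> !f2 do
      (* the finger with the smaller postorder number is deeper, so move it up *)
      while po.(!f1) < po.(!f2) do f1 := idom.(!f1) done;
      while po.(!f2) < po.(!f1) do f2 := idom.(!f2) done
    done;
    !f1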

Completing Control-flow Analysis
Dominator analysis identifies back edges: an edge n → h where h dominates n. Each back edge has a natural loop: h is the header, and the loop consists of all nodes reachable from h that also reach n without going through h. For each back edge n → h, the natural loop is {n′ | n is reachable from n′ in G − {h}} ∪ {h}. Two loops may share the same header: merge them. The nesting structure of loops is determined by set inclusion and can be used to build the control tree.
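A small OCaml sketch of natural-loop construction under the same assumed representation (preds gives CFG predecessors): collect every node that reaches n without passing through h, plus h itself.

  module IntSet = Set.Make (Int)

  let natural_loop ~(preds : int -> int list) ~(n : int) ~(h : int) : IntSet.t =
    let rec go worklist loop =
      match worklist with
      | [] -> loop
      | m :: rest ->
          if IntSet.mem m loop then go rest loop  (* already visited, or m = h *)
          else go (preds m @ rest) (IntSet.add m loop)
    in
    go [n] (IntSet.singleton h)  (* seeding with h blocks the walk at the header *)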

Example Natural Loops
[Figure: a CFG over nodes 1-9,0 with its natural loops marked, and the corresponding control tree.] The control tree depicts the nesting structure of the program.

Uses of Control-flow Information
Loop nesting depth plays an important role in optimization heuristics: deeply nested loops pay off the most for optimization. We need to know loop headers and back edges for loop-invariant code motion and loop unrolling. Dominance information also plays a role in converting to SSA form, and is used internally by LLVM to do register allocation.

REVISITING SSA
Phi nodes, alloca promotion, register allocation

Static Single Assignment (SSA)
LLVM IR names (via %uids) all intermediate values computed by the program, which makes the order of evaluation explicit. Each %uid is assigned to only once; contrast with the mutable quadruple form. (Recall that our dataflow analyses needed kill[n] sets precisely because of updates to variables.) A naïve implementation of the backend maps %uids to stack slots; a better implementation maps %uids to registers as much as possible. Question: how do we convert a source program to make maximal use of %uids, rather than alloca-created storage? There are two problems: control flow, and location in memory. Then: how do we convert SSA code to x86, mapping %uids to registers? Register allocation.

Alloca vs. %UID
Our current compilation strategy for
  int x = 3;
  int y = 0;
  x = x + 1;
  y = x + 2;
is:
  %x = alloca i64
  %y = alloca i64
  store i64 3, i64* %x
  store i64 0, i64* %y
  %x1 = load i64* %x
  %tmp1 = add i64 %x1, 1
  store i64 %tmp1, i64* %x
  %x2 = load i64* %x
  %tmp2 = add i64 %x2, 2
  store i64 %tmp2, i64* %y
Could we directly map source variables into %uids? Renaming each assignment, the source becomes
  int x1 = 3;
  int y1 = 0;
  int x2 = x1 + 1;
  int y2 = x2 + 2;
which compiles to
  %x1 = add i64 3, 0
  %y1 = add i64 0, 0
  %x2 = add i64 %x1, 1
  %y2 = add i64 %x2, 2
Does this always work?

What about If-then-else?
How do we translate this into SSA?
  int y = ...
  int x = ...
  int z = ...
  if (p) { x = y + 1; } else { x = y * 2; }
  z = x + 3;
The IR so far:
  entry:
    %y1 = ...
    %x1 = ...
    %z1 = ...
    %p = icmp ...
    br i1 %p, label %then, label %else
  then:
    %x2 = add i64 %y1, 1
    br label %merge
  else:
    %x3 = mul i64 %y1, 2
    br label %merge
  merge:
    %z2 = add i64 ???, 3
What do we put for ???

Solution: φ Functions
A phi function is a fictitious operator, used only for analysis; it is implemented by moves at the x86 level. It chooses among different versions of a variable based on the path by which control enters the phi node:
  %uid = phi <ty> v1, <label1>, ..., vn, <labeln>
For the example above:
  entry:
    %y1 = ...
    %x1 = ...
    %z1 = ...
    %p = icmp ...
    br i1 %p, label %then, label %else
  then:
    %x2 = add i64 %y1, 1
    br label %merge
  else:
    %x3 = mul i64 %y1, 2
    br label %merge
  merge:
    %x4 = phi i64 %x2, %then, %x3, %else
    %z2 = add i64 %x4, 3

Phi Nodes and Loops
Importantly, the %uids on the right-hand side of a phi node can be defined later in the control-flow graph. This means that %uids can hold values around a loop. The scope of %uids is defined by dominance (discussed soon).
  entry:
    %y1 = ...
    %x1 = ...
    br label %body
  body:
    %x2 = phi i64 %x1, %entry, %x3, %body
    %x3 = add i64 %x2, %y1
    %p = icmp slt i64 %x3, 10
    br i1 %p, label %body, label %after
  after:

Alloca Promotion
Not all source variables can be allocated to registers: not if the address of the variable is taken (as permitted in C, for example), and not if the address of the variable escapes (by being passed to a function). An alloca instruction is called promotable if neither of these two conditions holds. Two examples where %x cannot be promoted:
  entry:
    %x = alloca i64                // %x cannot be promoted
    %y = call malloc(i64 8)
    %ptr = bitcast i8* %y to i64**
    store i64* %x, i64** %ptr      // store the pointer into the heap
  entry:
    %x = alloca i64                // %x cannot be promoted
    %y = call foo(i64* %x)         // foo may store the pointer into the heap
Happily, most local variables declared in source programs are promotable, which means they can be register allocated.
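To make the promotability condition concrete, here is a sketch over a hypothetical mini-IR (not the course's actual LLVMlite AST): an alloca %x is promotable iff %x only ever appears as the pointer operand of loads and stores, never as a stored value or a call argument.

  type insn =
    | Load of string * string   (* %dst = load %ptr *)
    | Store of string * string  (* store %value, %ptr *)
    | Call of string list       (* call with these %uid arguments *)

  let promotable (x : string) (body : insn list) : bool =
    List.for_all
      (fun i ->
        match i with
        | Load _ -> true                       (* loading through %x is fine *)
        | Store (v, _) -> v <> x               (* %x itself must never be stored *)
        | Call args -> not (List.mem x args))  (* %x must not escape via a call *)
      body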

Converting to SSA: Overview
1. Start with the ordinary control-flow graph that uses allocas; identify the promotable allocas.
2. Compute dominator tree information.
3. Calculate def/use information for each such allocated variable.
4. Insert φ functions for each variable at the necessary join points.
5. Replace loads/stores to alloca'ed variables with freshly generated %uids.
6. Eliminate the now-unneeded load/store/alloca instructions.

Where to Place φ Functions?
We need to calculate the dominance frontier. Node A strictly dominates node B if A dominates B and A ≠ B. (Note: A does not strictly dominate B if A does not dominate B or A = B.) The dominance frontier of a node B is the set of all CFG nodes y such that B dominates a predecessor of y but does not strictly dominate y. Intuitively: starting at B, there is a path to y, but there is another route to y that does not go through B. Write DF[n] for the dominance frontier of node n.

Dominance Frontiers
Example of a dominance frontier calculation:
  DF[1] = {1}, DF[2] = {1,2}, DF[3] = {2}, DF[4] = {1}, DF[5] = {8,0},
  DF[6] = {8}, DF[7] = {7,0}, DF[8] = {0}, DF[9] = {7,0}, DF[0] = {}
[Figure: the control-flow graph over nodes 1-9,0 and its dominator tree.]

Algorithm for Computing DF[n]
Assume that doms[n] stores the dominator tree, so that doms[n] is the immediate dominator of n in the tree. The algorithm adds each node b to the DF sets to which it belongs:
  for all nodes b
    if #(pred[b]) ≥ 2 {              // (just an optimization)
      for each p ∈ pred[b] {
        runner := p                  // start at a predecessor of b
        while (runner ≠ doms[b]) {   // walk up the tree adding b
          DF[runner] := DF[runner] ∪ {b}
          runner := doms[runner]
        }
      }
    }
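The same algorithm in OCaml, as a hedged sketch (the representation is assumed, not course code: preds gives predecessors and idom b is doms[b]):

  module IntSet = Set.Make (Int)

  let dominance_frontiers ~(nodes : int list) ~(preds : int -> int list)
      ~(idom : int -> int) : (int, IntSet.t) Hashtbl.t =
    let df = Hashtbl.create 16 in
    let get n = Option.value (Hashtbl.find_opt df n) ~default:IntSet.empty in
    List.iter
      (fun b ->
        let ps = preds b in
        if List.length ps >= 2 then        (* just an optimization *)
          List.iter
            (fun p ->
              let runner = ref p in        (* start at a predecessor of b *)
              while !runner <> idom b do   (* walk up the tree adding b *)
                Hashtbl.replace df !runner (IntSet.add b (get !runner));
                runner := idom !runner
              done)
            ps)
      nodes;
    df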

Insert φ at Join Points
Lift DF to a set of nodes N in the obvious way: DF[N] = ⋃ over n ∈ N of DF[n]. Suppose that a variable x is defined at a set of nodes N. Define:
  DF₀[N] = DF[N]
  DFᵢ₊₁[N] = DF[DFᵢ[N] ∪ N]
Let J[N] be the least fixed point of the sequence DF₀[N] ⊆ DF₁[N] ⊆ DF₂[N] ⊆ DF₃[N] ⊆ ... That is, J[N] = DFₖ[N] for some k such that DFₖ[N] = DFₖ₊₁[N]. J[N] is called the set of join points for N. We insert φ functions for the variable x at each such join point: x = φ(x, x, ..., x), with one x argument for each predecessor of the node. In practice, J[N] is never computed directly; instead you use a worklist algorithm that keeps adding nodes from DFₖ[N] until there are no changes. Intuition: if N is the set of places where x is modified, then DF[N] is the set of places where phi nodes need to be added, but those phi nodes also count as modifications of x, so we need to insert phi nodes to capture those modifications too.
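The worklist version can be sketched like this (assumed representation: df n is DF[n] and defs is N); each frontier node is treated as a new definition site, exactly as the intuition above says:

  module IntSet = Set.Make (Int)

  let join_points ~(df : int -> IntSet.t) ~(defs : IntSet.t) : IntSet.t =
    let rec go worklist joins =
      match worklist with
      | [] -> joins
      | n :: rest ->
          let fresh = IntSet.diff (df n) joins in  (* frontier nodes not yet seen *)
          go (IntSet.elements fresh @ rest) (IntSet.union joins fresh)
    in
    go (IntSet.elements defs) IntSet.empty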

Example Join-point Calculation
Suppose the variable x is modified at nodes 3 and 6. Where would we need to add phi nodes?
  DF₀[{3,6}] = DF[{3,6}] = DF[3] ∪ DF[6] = {2,8}
  DF₁[{3,6}] = DF[DF₀[{3,6}] ∪ {3,6}] = DF[{2,3,6,8}]
             = DF[2] ∪ DF[3] ∪ DF[6] ∪ DF[8] = {1,2} ∪ {2} ∪ {8} ∪ {0} = {1,2,8,0}
  DF₂[{3,6}] = ... = {1,2,8,0}
So J[{3,6}] = {1,2,8,0}, and we need to add phi nodes at those four spots.

Phi Placement Alternative
A less efficient, but easier to understand, alternative: place phi nodes "maximally", i.e. at every node with ≥ 2 predecessors. If all values flowing into a phi node are the same, then eliminate it:
  %x = phi t %y, %pred1, %y, %pred2, ..., %y, %predk
  // code that uses %x
becomes
  // code with %x replaced by %y
Interleave this with other optimizations: copy propagation, constant propagation, etc.
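The elimination test is simple enough to sketch directly (the operand type here is hypothetical, not course code): a φ all of whose incoming values are identical can be replaced by that value.

  type operand = Id of string | Const of int64

  (* Returns Some v if the phi with these incoming (value, label) pairs is
     trivial, i.e. every predecessor supplies the same value v. *)
  let trivial_phi (incoming : (operand * string) list) : operand option =
    match incoming with
    | [] -> None
    | (v, _) :: rest ->
        if List.for_all (fun (v', _) -> v' = v) rest then Some v else None

If trivial_phi returns Some v, replace all uses of the φ's %uid by v and delete the instruction; repeat until nothing changes, since removing one trivial φ may make another one trivial.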

Example SSA Optimizations
The next slides walk one example through a pipeline of passes: find allocas; insert maximal φs; load-after-store / load-after-alloca (LAS/LAA); dead store elimination (DSE); dead alloca elimination (DAE); eliminate φs. How do we place phi nodes without breaking SSA? Note: the real implementation combines many of these steps into one pass and places φs directly at the dominance frontier. The example also illustrates other common optimizations: load after store/alloca, and dead store/alloca elimination. The starting code:
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      br %b, %l2, %l3
  l2: store 1, %p
      br %l3
  l3: %x = load %p
      ret %x

Step: insert loads at the end of each block.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      %x1 = load %p
      br %b, %l2, %l3
  l2: store 1, %p
      %x2 = load %p
      br %l3
  l3: %x = load %p
      ret %x

Step: insert φ-nodes at each block.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      %x1 = load %p
      br %b, %l2, %l3
  l2: %x3 = φ[%x1, %l1]
      store 1, %p
      %x2 = load %p
      br %l3
  l3: %x4 = φ[%x1, %l1; %x2, %l2]
      %x = load %p
      ret %x

Step: insert stores after the φ-nodes.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      %x1 = load %p
      br %b, %l2, %l3
  l2: %x3 = φ[%x1, %l1]
      store %x3, %p
      store 1, %p
      %x2 = load %p
      br %l3
  l3: %x4 = φ[%x1, %l1; %x2, %l2]
      store %x4, %p
      %x = load %p
      ret %x

Step: loads after stores (LAS). For each load that follows a store to the same location, substitute all uses of the load by the value being stored, then remove the load. The next slides apply LAS one load at a time.

LAS: %x1 = load %p follows store 0, %p, so uses of %x1 become 0.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      %x1 = load %p
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      store %x3, %p
      store 1, %p
      %x2 = load %p
      br %l3
  l3: %x4 = φ[0, %l1; %x2, %l2]
      store %x4, %p
      %x = load %p
      ret %x

LAS: the load %x1 is now unused and is removed.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      store %x3, %p
      store 1, %p
      %x2 = load %p
      br %l3
  l3: %x4 = φ[0, %l1; %x2, %l2]
      store %x4, %p
      %x = load %p
      ret %x

LAS: %x2 = load %p follows store 1, %p, so uses of %x2 become 1.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      store %x3, %p
      store 1, %p
      %x2 = load %p
      br %l3
  l3: %x4 = φ[0, %l1; 1, %l2]
      store %x4, %p
      %x = load %p
      ret %x

LAS: the load %x2 is removed.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      store %x3, %p
      store 1, %p
      br %l3
  l3: %x4 = φ[0, %l1; 1, %l2]
      store %x4, %p
      %x = load %p
      ret %x

LAS: %x = load %p follows store %x4, %p, so uses of %x become %x4.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      store %x3, %p
      store 1, %p
      br %l3
  l3: %x4 = φ[0, %l1; 1, %l2]
      store %x4, %p
      %x = load %p
      ret %x4

LAS: the dead load %x is removed. Next: Dead Store Elimination (DSE) eliminates all stores with no subsequent loads, and Dead Alloca Elimination (DAE) eliminates all allocas with no subsequent loads/stores.
  l1: %p = alloca i64
      store 0, %p
      %b = %y > 0
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      store %x3, %p
      store 1, %p
      br %l3
  l3: %x4 = φ[0, %l1; 1, %l2]
      store %x4, %p
      ret %x4

After DSE and DAE, all stores and the alloca for %p are gone. Final step: eliminate φ nodes that are singletons, or whose incoming values from each predecessor are identical (see Aycock & Horspool, 2002).
  l1: %b = %y > 0
      br %b, %l2, %l3
  l2: %x3 = φ[0, %l1]
      br %l3
  l3: %x4 = φ[0, %l1; 1, %l2]
      ret %x4

Done! The singleton φ for %x3 has been eliminated:
  l1: %b = %y > 0
      br %b, %l2, %l3
  l2: br %l3
  l3: %x4 = φ[0, %l1; 1, %l2]
      ret %x4

LLVM Phi Placement
This transformation is also sometimes called register promotion; older versions of LLVM called it mem2reg, for memory-to-register promotion. In practice, LLVM combines this transformation with scalar replacement of aggregates (SROA), i.e. transforming loads/stores of structured data into loads/stores on register-sized data. These algorithms are (one reason) why LLVM IR allows annotation of predecessor information in the .ll files: it simplifies computing the dominance frontier.

FINAL EXAM
Thursday, May 4th, noon to 2:00 p.m. Location: DRLB A4

Final Exam
The exam will cover material since the midterm almost exclusively, starting from Lecture 14: typechecking; objects, inheritance, types, and the implementation of dynamic dispatch; basic optimizations; dataflow analysis (forward vs. backward, fixpoint computations, etc.); liveness; graph-coloring register allocation; control-flow analysis; loops and dominator trees. It will focus more on the theory side of things. The format will be similar to the midterm: simple answer, computation, multiple choice, etc. A sample exam from last time is on the web.

COURSE WRAP-UP
What have we learned? Where else is it applicable? What next?

Why CIS 341?
You will learn: practical applications of theory; parsing; how high-level languages are implemented in machine language; (a subset of) the Intel x86 architecture; a deeper understanding of code; a little about programming language semantics; functional programming in OCaml; how to manipulate complex data structures; how to be a better programmer. Did we meet these goals?

Stuff We Didn't Cover
We skipped material at every level.
Concrete syntax/parsing: there is much more to the theory of parsing, and good syntax is art, not science!
Source language features: exceptions, recursive data types (easy!), advanced type systems, type inference, concurrency.
Intermediate languages: intermediate language design, bytecode, bytecode interpreters, just-in-time compilation (JIT).
Compilation: continuation-passing transformation, efficient representations, scalability.
Optimization: scientific computing, cache optimization, instruction selection/optimization.
Runtime support: memory management, garbage collection.

Related Courses
CIS 500: Software Foundations (Prof. Pierce). A theoretical course about functional programming, proving program properties, type systems, and the lambda calculus. Uses the theorem prover Coq.
CIS 501: Computer Architecture (Prof. Devietti). 371++: pipelining, caches, VM, superscalar, multicore, ...
CIS 552: Advanced Programming (Prof. Weirich). Advanced functional programming in Haskell, including generic programming, metaprogramming, embedded languages, and cool tricks with fancy type systems.
CIS 670: Special topics in programming languages.

Where to Go from Here?
Conferences (proceedings available on the web): Programming Language Design and Implementation (PLDI), Principles of Programming Languages (POPL), Object-Oriented Programming Systems, Languages & Applications (OOPSLA), International Conference on Functional Programming (ICFP), European Symposium on Programming (ESOP).
Technologies and open source projects: yacc, lex, bison, flex; LLVM (low level virtual machine); the Java virtual machine (JVM); Microsoft's Common Language Runtime (CLR).
Languages: OCaml, F#, Haskell, Scala, Go, Rust, ...?

Where Else Is This Stuff Applicable?
General programming: in C/C++, a better understanding of how the compiler works can help you generate better code, and gives you the ability to read the assembly output from the compiler. Experience with functional programming can give you different ways to think about how to solve a problem.
Writing domain-specific languages: lex/yacc are very useful for little utilities, as is understanding abstract syntax and interpretation.
Understanding the hardware/software interface: different devices have different instruction sets and programming models.

Thanks!
To the TAs, Dmitri, Richard, J.J., and Vivek, for doing an amazing job putting together the projects for the course. And to you, for taking the class! How can I improve the course? A feedback survey will be posted to Piazza (soon!).