CSE443 Compilers. Dr. Carl Alphonce 343 Davis Hall

CSE443 Compilers Dr. Carl Alphonce alphonce@buffalo.edu 343 Davis Hall http://www.cse.buffalo.edu/faculty/alphonce/sp17/cse443/index.php https://piazza.com/class/iybn4ndqa1s3ei

Announcements Grading survey - link posted in Piazza. Please respond by Sunday night. PR05 will be posted over the weekend. Due Friday 5/12. HW5 will be posted over the weekend. Due Monday 5/1.

Phases of a compiler: target machine code generation (Figure 1.6, page 5 of text)

Code Transformations on basic blocks Local optimizations can be performed on code inside basic blocks. Represent code inside a basic block as a DAG.

Example from last class (Figure 8.7 [p. 527])
1) i = 1
2) j = 1
3) t1 = 10 * i
4) t2 = t1 + j
5) t3 = 8 * t2
6) t4 = t3 - 88
7) a[t4] = 0.0
8) j = j + 1
9) if j <= 10 goto (3)
10) i = i + 1
11) if i <= 10 goto (2)
12) i = 1
13) t5 = i - 1
14) t6 = 88 * t5
15) a[t6] = 1.0
16) i = i + 1
17) if i <= 10 goto (13)

Identifying leaders (example from last class)
L  1) i = 1
L  2) j = 1
L  3) t1 = 10 * i
   4) t2 = t1 + j
   5) t3 = 8 * t2
   6) t4 = t3 - 88
   7) a[t4] = 0.0
   8) j = j + 1
   9) if j <= 10 goto (3)
L 10) i = i + 1
  11) if i <= 10 goto (2)
L 12) i = 1
L 13) t5 = i - 1
  14) t6 = 88 * t5
  15) a[t6] = 1.0
  16) i = i + 1
  17) if i <= 10 goto (13)

Example from last class: flow graph, Figure 8.9 [p. 530]. Entry and exit nodes added; jump targets replaced by block names.
ENTRY
B1: i = 1
B2: j = 1
B3: t1 = 10 * i
    t2 = t1 + j
    t3 = 8 * t2
    t4 = t3 - 88
    a[t4] = 0.0
    j = j + 1
    if j <= 10 goto B3
B4: i = i + 1
    if i <= 10 goto B2
B5: i = 1
B6: t5 = i - 1
    t6 = 88 * t5
    a[t6] = 1.0
    i = i + 1
    if i <= 10 goto B6
EXIT
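To make the leader rule concrete, here is a small illustrative sketch (Python, not from the text or slides) that finds the leaders of Figure 8.7's code and splits it into the six basic blocks B1-B6 of the flow graph above. The statement encoding and helper names are my own.

import re

def find_leaders(stmts):
    """stmts: dict {statement number: text}. Returns the sorted list of leaders."""
    numbers = sorted(stmts)
    leaders = {numbers[0]}                      # rule 1: the first statement
    for n in numbers:
        m = re.search(r'goto \((\d+)\)', stmts[n])
        if m:
            leaders.add(int(m.group(1)))        # rule 2: the target of a jump
            if n + 1 in stmts:
                leaders.add(n + 1)              # rule 3: the statement after a jump
    return sorted(leaders)

def make_blocks(stmts):
    """Group statements into basic blocks, one block per leader."""
    leaders = find_leaders(stmts)
    blocks = []
    for i, lead in enumerate(leaders):
        end = leaders[i + 1] if i + 1 < len(leaders) else max(stmts) + 1
        blocks.append([stmts[n] for n in range(lead, end)])
    return blocks

stmts = {
    1: "i = 1", 2: "j = 1", 3: "t1 = 10 * i", 4: "t2 = t1 + j",
    5: "t3 = 8 * t2", 6: "t4 = t3 - 88", 7: "a[t4] = 0.0", 8: "j = j + 1",
    9: "if j <= 10 goto (3)", 10: "i = i + 1", 11: "if i <= 10 goto (2)",
    12: "i = 1", 13: "t5 = i - 1", 14: "t6 = 88 * t5", 15: "a[t6] = 1.0",
    16: "i = i + 1", 17: "if i <= 10 goto (13)",
}

print(find_leaders(stmts))      # [1, 2, 3, 10, 12, 13]
print(len(make_blocks(stmts)))  # 6 basic blocks, i.e. B1..B6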

Constructing DAG for basic blocks 1. For each variable in the block, create a node representing the variable's initial value. 2. For each statement in the block, create a node.

Constructing DAG for basic blocks 3. For each node representing a statement, label it with the operator applied. 4. For each node representing a statement, attach a list of the variables for which it is the last definition within the block.

Constructing DAG for basic blocks 5. For each node representing a statement, its children are the nodes that are the last definitions of the operands used in the statement. 6. Identify as output nodes those whose variables are live on exit from the block.

Example 8.10 [p. 534]
1) a = b + c
2) b = a - d
3) c = b + c
4) d = a - d

Example 8.10 [p. 534]. Apply the "value-number" method from section 6.1.1.
1) a = b + c
2) b = a - d
3) c = b + c
4) d = a - d
[DAG so far: leaves b0 and c0]

Example 8.10 [p. 534]. Apply the "value-number" method from section 6.1.1.
1) a = b + c
2) b = a - d
3) c = b + c
4) d = a - d
[DAG so far: node + labeled a, with children b0 and c0]

Example 8.10 [p. 534]. Apply the "value-number" method from section 6.1.1.
1) a = b + c
2) b = a - d
3) c = b + c
4) d = a - d
[DAG so far: node + labeled a (children b0, c0); node - labeled b (children a, d0); leaves b0, c0, d0]

Example 8.10 [p. 534]. Apply the "value-number" method from section 6.1.1.
1) a = b + c
2) b = a - d
3) c = b + c
4) d = a - d
[DAG so far: node + labeled a (children b0, c0); node - labeled b (children a, d0); node + labeled c (children b, c0); leaves b0, c0, d0]

Example 8.10 [p. 534]. Apply the "value-number" method from section 6.1.1.
1) a = b + c
2) b = a - d
3) c = b + c
4) d = a - d
[Final DAG: node + labeled a (children b0, c0); node - labeled b, d (children a, d0), shared by statements 2 and 4; node + labeled c (children b, c0); leaves b0, c0, d0]

Example 8.10 [p. 534]. If b is live on exit:
1) a = b + c
2) b = a - d
3) c = b + c
4) d = b
[DAG as on the previous slide]

Example 8.10 [p. 534]. If b is not live on exit:
1) a = b + c
2) d = a - d
3) c = d + c
[DAG as on the previous slide, with the - node now labeled only d]
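The construction traced above can be carried out mechanically. Below is a minimal illustrative sketch (Python; the Node class and helper names are invented for this sketch, not taken from the text) that applies steps 1-6 to Example 8.10's block, sharing identical nodes and recording on each node the variables for which it is the last definition.

class Node:
    def __init__(self, op, children=()):
        self.op = op                  # operator, or a leaf name such as "b0"
        self.children = children      # nodes for the operands' last definitions
        self.labels = []              # variables whose last definition this is

def build_dag(stmts):
    """stmts: list of (target, operator, operand-names) triples."""
    nodes = []                        # all nodes, in creation order
    current = {}                      # variable -> node holding its last definition
    shared = {}                       # (op, child ids) -> node, for structure sharing

    def value_of(var):                # step 1: leaf for a variable's initial value
        if var not in current:
            leaf = Node(var + "0")    # e.g. leaf b0 for the initial value of b
            nodes.append(leaf)
            current[var] = leaf
        return current[var]

    for target, op, operands in stmts:
        kids = tuple(value_of(v) for v in operands)
        key = (op, tuple(id(k) for k in kids))
        node = shared.get(key)
        if node is None:              # steps 2, 3, 5: new node labeled with op,
            node = Node(op, kids)     # children = last definitions of the operands
            nodes.append(node)
            shared[key] = node
        # step 4: this node is now the last definition of target
        if target in current and target in current[target].labels:
            current[target].labels.remove(target)
        node.labels.append(target)
        current[target] = node
    # (step 6, marking output nodes from liveness information, is omitted here)
    return nodes

# Example 8.10's block:
for n in build_dag([("a", "+", ("b", "c")),
                    ("b", "-", ("a", "d")),
                    ("c", "+", ("b", "c")),
                    ("d", "-", ("a", "d"))]):
    print(n.op, [c.op for c in n.children], n.labels)
# The interior nodes come out labeled ['a'], ['b', 'd'] and ['c'],
# matching the DAG built on the slides above.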

8.5.3 Dead Code Elimination
"Delete from a DAG any root [ ] that has no live variables attached." [p. 535]
1) a = b + c
2) b = b - d
3) c = c + d
4) e = b + c
[DAG: node + labeled e (children b, c); node + labeled a (children b0, c0); node - labeled b (children b0, d0); node + labeled c (children c0, d0); leaves b0, c0, d0]

8.5.3 Dead Code Elimination
"Delete from a DAG any root [ ] that has no live variables attached." [p. 535]
If c and e are NOT live on exit:
1) a = b + c
2) b = b - d
3) c = c + d
4) e = b + c
[DAG as on the previous slide]

8.5.3 Dead Code Elimination
"Delete from a DAG any root [ ] that has no live variables attached." [p. 535]
Delete the + node with e attached:
1) a = b + c
2) b = b - d
3) c = c + d
4) e = b + c
[DAG as on the previous slide; the + node labeled e is about to be deleted]

8.5.3 Dead Code Elimination
"Delete from a DAG any root [ ] that has no live variables attached." [p. 535]
Delete the + node with c attached:
1) a = b + c
2) b = b - d
3) c = c + d
[DAG: node + labeled a; node - labeled b; node + labeled c; leaves b0, c0, d0]

8.5.3 Dead Code Elimination
"Delete from a DAG any root [ ] that has no live variables attached." [p. 535]
After deleting the + node with c attached:
1) a = b + c
2) b = b - d
[DAG: node + labeled a; node - labeled b; leaves b0, c0, d0]
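A small illustrative sketch of this root-deletion loop (Python; the node encoding is hypothetical, not the text's data structure): with c and e not live on exit it deletes the node labeled e, which exposes the node labeled c as a dead root, and stops when only the nodes for a and b plus the leaves remain, as in the walkthrough above.

def eliminate_dead_roots(nodes, live):
    """nodes: dict node_id -> (operator, child_ids, attached_variables)."""
    nodes = dict(nodes)
    changed = True
    while changed:
        changed = False
        referenced = {c for (_, kids, _) in nodes.values() for c in kids}
        for nid, (_op, kids, attached) in list(nodes.items()):
            is_root = nid not in referenced
            if is_root and not kids:
                continue                  # keep leaves (initial values) in this sketch
            if is_root and not (set(attached) & live):
                del nodes[nid]            # dead root: no live variable attached
                changed = True
    return nodes

# The DAG from the slides above (a = b+c, b = b-d, c = c+d, e = b+c),
# with c and e NOT live on exit:
dag = {
    "b0": ("leaf", (), ["b0"]),
    "c0": ("leaf", (), ["c0"]),
    "d0": ("leaf", (), ["d0"]),
    "n1": ("+", ("b0", "c0"), ["a"]),    # a = b + c
    "n2": ("-", ("b0", "d0"), ["b"]),    # b = b - d
    "n3": ("+", ("c0", "d0"), ["c"]),    # c = c + d
    "n4": ("+", ("n2", "n3"), ["e"]),    # e = b + c (the redefined b and c)
}
remaining = eliminate_dead_roots(dag, live={"a", "b"})
print(sorted(remaining))   # leaves plus n1 and n2 survive; n4, then n3, are deleted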

8.5.4 Algebraic Identities
"... apply arithmetic identities to eliminate computations from a basic block" [p. 536]
x + 0 = 0 + x = x
x * 1 = 1 * x = x
x * 0 = 0 * x = 0
x - 0 = x
x / 1 = x

8.5.4 Algebraic Identities
"Another class of algebraic optimizations includes local reduction in strength, replacing a more expensive operator by a cheaper one ..." [p. 536]
x² = x * x
2 * x = x + x (or a left shift, for integer x)
x / 2 = x * 0.5 (or a right shift, for integer x)

8.5.4 Algebraic Identities
"A third class is constant folding ... evaluate constant expressions at compile time ..." [p. 536]
2 * 3.14 = 6.28
In a footnote: "Arithmetic expressions should be evaluated the same way at compile time as they are at run time. [ ] compile the constant expression, execute the target code on the spot, and replace the expression with the result." [p. 536]
Consider the problem of cross-compilation.
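All three classes just described (algebraic identities, reduction in strength, constant folding) can be applied as local rewrites of a single three-address statement. Below is an illustrative sketch in Python; the statement tuple format and the particular rule set are assumptions made for this example, not the text's algorithm.

import operator

# Constant-folding table for the sketch.
FOLD = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def simplify(stmt):
    """stmt is (target, op, x, y); operands are variable names or numeric constants."""
    target, op, x, y = stmt
    num = lambda v: isinstance(v, (int, float))

    # Constant folding: evaluate at compile time when both operands are constants
    # (division by a constant zero is left alone rather than folded).
    if num(x) and num(y) and op in FOLD and not (op == "/" and y == 0):
        return (target, "=", FOLD[op](x, y), None)

    # Algebraic identities: reduce to a copy or to the constant 0.
    if op == "+" and y == 0: return (target, "=", x, None)
    if op == "+" and x == 0: return (target, "=", y, None)
    if op == "*" and y == 1: return (target, "=", x, None)
    if op == "*" and x == 1: return (target, "=", y, None)
    if op == "*" and (x == 0 or y == 0): return (target, "=", 0, None)
    if op == "-" and y == 0: return (target, "=", x, None)
    if op == "/" and y == 1: return (target, "=", x, None)

    # Reduction in strength: replace a more expensive operator by a cheaper one.
    if op == "**" and y == 2: return (target, "*", x, x)       # x**2 -> x * x
    if op == "*" and y == 2: return (target, "+", x, x)        # 2 * x -> x + x
    if op == "*" and x == 2: return (target, "+", y, y)
    if op == "/" and y == 2: return (target, "*", x, 0.5)      # x / 2 -> x * 0.5

    return stmt

print(simplify(("t1", "*", 2, 3.14)))   # ('t1', '=', 6.28, None)   constant folding
print(simplify(("t2", "+", "x", 0)))    # ('t2', '=', 'x', None)    identity
print(simplify(("t3", "*", "x", 2)))    # ('t3', '+', 'x', 'x')     strength reduction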

8.5.4 Algebraic Identities "The DAG-construction process can help us apply these and other more general algebraic transformations such as commutativity and associativity." [p. 536] "Before we create a new node labeled * with left child M and right child N, we always check whether such a node already exists. However, because * is commutative, we should then check for a node having operator *, left child N, and right child M." [p. 536]

8.5.4 Algebraic Identities x < y may be tested by computing y - x and examining the resulting condition codes. If the code already computes y - x elsewhere, that subtraction need not be performed a second time.

8.5.4 Algebraic Identities
Consider:
a = b + c
e = c + d + b

8.5.4 Algebraic Identities
Consider:
a = b + c
e = c + d + b
Note that the sum b + c is computed twice.

8.5.4 Algebraic Identities
Consider:
a = b + c
e = c + d + b
Note that the sum b + c is computed twice. Using both the associativity and commutativity of + we can rearrange:
a = b + c
e = b + c + d

8.5.4 Algebraic Identities
Consider:
a = b + c
e = c + d + b
Note that the sum b + c is computed twice. Using both the associativity and commutativity of + we can rearrange:
a = b + c
e = b + c + d
and then simplify to:
a = b + c
e = a + d
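A small illustrative sketch (Python) of how the commutativity check quoted earlier (look for a node with children N, M as well as M, N) makes this reuse happen: operand order is canonicalized for commutative operators before a node is looked up, so once e = c + d + b has been reassociated by hand into t = c + b; e = t + d, as above, the sum b + c is recognized as already computed and e becomes a + d. The table and statement encoding are my own, not the text's.

# Hash-based node lookup with commutative canonicalization (value-number style).
COMMUTATIVE = {"+", "*"}

def local_cse(stmts):
    table = {}        # canonical (op, operands) -> name of the first result
    value = {}        # variable -> the name currently holding its value
    out = []
    for target, op, x, y in stmts:
        x, y = value.get(x, x), value.get(y, y)
        key = (op, *sorted((x, y))) if op in COMMUTATIVE else (op, x, y)
        if key in table:                       # node already exists: reuse it
            value[target] = table[key]
            out.append((target, "=", table[key], None))
        else:
            table[key] = target
            value[target] = target
            out.append((target, op, x, y))
    return out

# e = c + d + b, reassociated as on the slide into t = c + b; e = t + d:
code = [("a", "+", "b", "c"),
        ("t", "+", "c", "b"),    # the same sum as a, written in the other order
        ("e", "+", "t", "d")]
for stmt in local_cse(code):
    print(stmt)
# ('a', '+', 'b', 'c')
# ('t', '=', 'a', None)   <- b + c recognized as already computed
# ('e', '+', 'a', 'd')    <- so e = a + d, as on the slide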

8.5.5 Array References Array indexing must be handled with care. "An assignment from an array, like x = a[i], is represented by creating a node with operator =[] and two children representing the initial value of the array, a0 in this case, and the index i. Variable x becomes a label of this new node." [p. 537]

x = a[i]
[DAG: node =[] labeled x, with children a0 and i]

8.5.5 Array References "An assignment to an array, like a[j] = y, is represented by creating a node with operator []= and three children representing a0, j and y. There is no variable labeling this node. What is different is that the creation of this node kills all currently constructed nodes whose value depends on a0. A node that has been killed cannot receive any more labels; that is, it cannot become a common subexpression." [p. 537]

a[j] = y
[DAG: node []= with children a0, j, and y]

Ex. 8.13 [p. 538]
1) x = a[i]
2) a[j] = y
3) z = a[i]
Issue: we cannot assume that the a[i] in the third statement is the same as the a[i] in the first, since it may be the case that i = j.

Ex. 8.13 [p. 538]
1) x = a[i]
2) a[j] = y
3) z = a[i]
[DAG after statement 1: node =[] labeled x, with children a0 and i0]

Ex. 8.13 [p. 538]
1) x = a[i]
2) a[j] = y
3) z = a[i]
Because the first node we built depends on a0, it is killed, meaning that no more variables can be added to its label from this point on.
[DAG after statement 2: node =[] labeled x (killed); node []= with children a0, j0, and y0]

Ex. 8.13 [p. 538]
1) x = a[i]
2) a[j] = y
3) z = a[i]
The effect of this is that even though the third statement involves a[i], we cannot share structure by adding z as a label next to x. Instead a new node must be constructed, forcing recomputation of this value.
[DAG after statement 3: a new node =[] labeled z with children a0 and i0; the killed node labeled x; node []= with children a0, j0, and y0]
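An illustrative sketch of the kill behavior in Example 8.13 (Python; the Node class, the load/store helpers, and the killed flag are inventions for this sketch, not the text's data structures): the store a[j] = y marks every node that depends on a0 as killed, so the second a[i] builds a fresh =[] node rather than sharing the node labeled x.

class Node:
    def __init__(self, op, children=()):
        self.op, self.children, self.labels, self.killed = op, list(children), [], False

    def depends_on(self, other):
        return self is other or any(c.depends_on(other) for c in self.children)

nodes = {}

def leaf(name):
    return nodes.setdefault(name, Node(name))

def load(target, array, index):           # x = a[i]  ->  =[] node
    a0, i0 = leaf(array + "0"), leaf(index + "0")
    for n in nodes.values():               # reuse only un-killed =[] nodes
        if n.op == "=[]" and n.children == [a0, i0] and not n.killed:
            n.labels.append(target)
            return n
    n = Node("=[]", (a0, i0))
    n.labels.append(target)
    nodes["load_" + target] = n
    return n

def store(array, index, value):            # a[j] = y  ->  []= node, kills a0 users
    a0 = leaf(array + "0")
    n = Node("[]=", (a0, leaf(index + "0"), leaf(value + "0")))
    nodes["store_" + array + index] = n
    for m in nodes.values():
        if m is not n and m is not a0 and m.depends_on(a0):
            m.killed = True                 # killed nodes receive no further labels
    return n

x = load("x", "a", "i")     # 1) x = a[i]
store("a", "j", "y")        # 2) a[j] = y   kills the node labeled x
z = load("z", "a", "i")     # 3) z = a[i]   builds a NEW =[] node
print(x is z, x.killed)     # False True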

Next week
Monday: guest lecture by Kris
W/F/M:
8.5 A few remaining details
8.6 A simple code generator
8.7 Peephole optimization
8.8 Register allocation & assignment
8.9 Instruction selection by tree rewriting
8.10 Optimal code generation for expressions
8.11 Dynamic programming code generation

Further ahead
9. Machine-Independent Optimizations
10. Instruction-Level Parallelism
11. Optimizing for Parallelism and Locality
12. Interprocedural Analysis