CS 403 Compiler Construction Lecture 10 Code Optimization [Based on Chapter 8.5, 9.1 of Aho2]


This Lecture

Remember: Phases of a Compiler. This lecture covers the code optimization phase.

What is Code Optimization? We have intermediate code, such as a DAG and three-address code; the compiler sometimes constructs the DAG and the three-address code together, one after the other. From the DAG and the three-address code, the compiler performs code optimization: it deletes unnecessary code, reduces the number of operations, replaces slower code with faster code, and more. There are two types of code optimization:
Local: optimization within each basic block of the intermediate code. (Basic block: a block of code that runs sequentially, that is, with no jump instructions.)
Global: optimization between basic blocks (see the small example below).
There are many techniques for optimizing intermediate code.
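For instance (an illustrative fragment, not from the slides), the three-address code below falls into two basic blocks:

    B1:  1: t1 = a + b
         2: t2 = t1 * c
         3: if t2 > 0 go to (1)
    B2:  4: t3 = a + b
         5: d = t3

Here t3 = a + b recomputes the value of t1 = a + b, but the two statements live in different blocks, so only a global optimization can reuse t1; reusing a value computed earlier within B1 alone would be a local optimization.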

Local Optimization

Technique 1: Finding and Eliminating Local Common Subexpressions

A local common subexpression is code that computes a value that has already been computed within the same basic block. From a block of code, the compiler builds the DAG again. During this construction, the compiler looks for local common subexpressions and eliminates them as follows: if the compiler is about to create a node N that has the same children, in the same order, with the same operator as an existing node M, then N and M are the same node; so it uses M, and there is no need to create N.
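As a minimal sketch of that reuse check, assuming a simple array-backed node table (the names and representation are illustrative, not the lecture's):

    #include <stdio.h>

    typedef struct { char op; int left, right; } Node;  /* children are node indices */

    static Node table[100];
    static int count = 0;

    /* Return an existing node with the same operator and the same children
       in the same order, or create a new node and return its index. */
    static int find_or_create(char op, int left, int right) {
        for (int m = 0; m < count; m++)
            if (table[m].op == op && table[m].left == left && table[m].right == right)
                return m;                           /* reuse M: common subexpression */
        table[count].op = op;
        table[count].left = left;
        table[count].right = right;
        return count++;                             /* create the new node N */
    }

    int main(void) {
        int b0 = count; table[count++] = (Node){'b', -1, -1};   /* leaf for b0 */
        int c0 = count; table[count++] = (Node){'c', -1, -1};   /* leaf for c0 */
        int n1 = find_or_create('+', b0, c0);   /* first b + c: creates a node */
        int n2 = find_or_create('+', b0, c0);   /* second b + c: reuses it */
        printf("%d %d\n", n1, n2);              /* prints the same index twice */
        return 0;
    }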

Technique 1: Finding and Eliminating Local Common Subexpressions

Example 1: the compiler constructs a DAG for the following block of three-address code and eliminates local common subexpressions:

    a = b + c
    b = a - d
    c = b + c
    d = a - d

The compiler checks, creates the DAG, and rewrites the code step by step:
First line: the compiler creates one internal node (+ with children b0 and c0).
Second line: not the same as the first line, so the compiler creates another internal node.
Third line: looks the same as the first line, but actually is not. In the first line the children are b0 and c0, but by the third line b has been changed, so it is no longer b0. So the compiler creates a new internal node for this line. (A subscript 0 means the initial value; without the 0, the value has changed.)
Fourth line: the same as the second line, because a is still the same a as after the second line, d is d0 in both places, and their order is the same. So the compiler uses node b for d, and rewrites the three-address code to get shorter code:

    a = b + c
    b = a - d
    c = b + c
    d = b

Technique 1: Finding and Eliminating Local Common Subexpressions (does not always optimize)

Example 2: constructing the DAG does not always optimize. For example, take the following block:

    a = b + c
    b = b - d
    c = c + d
    e = b + c

The compiler checks and creates the DAG step by step:
First line: the compiler creates one internal node.
Second line: not the same as the first line, so the compiler creates another internal node.
Third line: not the same as the first or second line, so the compiler creates another new internal node.
Fourth line: not the same as the first line, because b and c have been changed. So the compiler creates a new internal node.
Now the DAG has four internal nodes, so rewriting it gives back the same three-address code. BUT the fourth line is actually e = b + c = (b0 - d0) + (c0 + d0) = b0 + c0, the same as the first line. So e and a are the same, and the fourth line is not required. Technique 1 cannot detect this.
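Aho Chapter 8.5 notes a further improvement to Example 1 that the rewrite above leaves on the table: if the compiler can verify that b is not live on exit from the block (an assumption that must be checked), the copy d = b can be avoided entirely and the block shortened to:

    a = b + c
    d = a - d
    c = d + c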

Technique 2: Use of Algebraic Identities

There are many techniques here.

Algebraic identities: the compiler applies identities such as x + 0 = 0 + x = x, x - 0 = x, x * 1 = 1 * x = x, and x / 1 = x. That means x + 0 can be replaced by x, x / 1 can be replaced by x, and so on.

Reduction in strength: where possible, the compiler replaces an expensive operation by a cheaper one (* is cheaper than exponentiation, + is cheaper than *, * is cheaper than /, and so on).

Constant folding: operations on constants can be evaluated at compile time and replaced by their constant values.

The compiler does all of this on the DAG and then rewrites the code. Example: next slide.

Technique 2: Use of Algebraic Identities

Example: the compiler optimizes the code for (x+0)*(4/2) as follows. The code is the same as (x+0)*(4/2) = x * 2 = x + x. So the compiler writes the three-address code and the DAG, optimizes, and finally rewrites:

    t1 = x + 0      (algebraic identity: x + 0 = x)
    t2 = 4 / 2      (constant folding: 4 / 2 = 2)
    t3 = t1 * t2    (reduction in strength: x * 2 = x + x)

becomes

    t1 = x + x
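As a sketch of constant folding at the IR level (illustrative code, not the lecture's; the function name is made up), an operator whose operands are both compile-time constants is simply evaluated by the compiler:

    #include <stdio.h>

    /* Fold a binary operation whose operands are both compile-time constants.
       Returns 1 and stores the folded value in *out, or 0 if folding is unsafe. */
    static int fold(char op, int a, int b, int *out) {
        switch (op) {
        case '+': *out = a + b; return 1;
        case '-': *out = a - b; return 1;
        case '*': *out = a * b; return 1;
        case '/': if (b == 0) return 0;      /* preserve the runtime error */
                  *out = a / b; return 1;
        default:  return 0;
        }
    }

    int main(void) {
        int v;
        if (fold('/', 4, 2, &v))             /* t2 = 4 / 2 ...      */
            printf("t2 = %d\n", v);          /* ... becomes t2 = 2  */
        return 0;
    }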

Technique 2: Use of Algebraic Identities (continued)

Replace >, <, == by -: comparisons can be done with subtraction, because - is cheaper than >, <, and ==. For example, x > y is the same as x - y being positive, x < y is the same as x - y being negative, and x == y is the same as x - y being zero.

Apply associativity to reduce code: two code sequences can compute the same value because of the associativity of +, letting the compiler reuse a common subexpression.

Rearrangement of operations: rearrange/simplify operations to reduce the total number of operations. For example, x*y - x*z can be rewritten as x*(y - z), saving one multiplication.

BUT, be careful: we need to be careful when we optimize, because the optimization may introduce errors. For example, the following rewrite, which tries to remove the third array access A[i] by reusing x, is wrong, because j may be the same as i, in which case A[i] has been changed:

    X = A[i];          X = A[i];
    A[j] = y;    →     A[j] = y;
    Z = A[i];          Z = x;      (wrong if j == i)
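A concrete trace of why that rewrite is unsafe, with hypothetical values chosen so that j equals i:

    #include <stdio.h>

    int main(void) {
        int A[1] = { 10 };
        int i = 0, j = 0, y = 99;    /* j happens to equal i */
        int X = A[i];                /* X = A[i], so X = 10 */
        A[j] = y;                    /* this store also changes A[i] */
        int Z = A[i];                /* original code: Z = 99 */
        int Z_opt = X;               /* "optimized" Z = x: still 10, wrong */
        printf("Z = %d, Z_opt = %d\n", Z, Z_opt);   /* prints Z = 99, Z_opt = 10 */
        return 0;
    }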

Global Optimization Techniques

Similar to local optimization, there are many techniques for global optimization, including:
Eliminating global common subexpressions
Copy propagation
Dead-code elimination
Constant folding
However, every transformation or optimization must be a semantics-preserving transformation; that is, the meaning must not change when we optimize across blocks.

Global Optimization

We shall explain global optimization with the following example. Left side: program code (a sketch follows the listing). Right side: the three-address code generated by the compiler, divided into basic blocks:

    B1:  1: i = m - 1
         2: j = n
         3: v = a[n]
    B2:  4: i = i + 1
         5: t1 = a[i]
         6: if t1 < v go to (4)
    B3:  7: j = j - 1
         8: t2 = a[j]
         9: if t2 > v go to (7)
    B4: 10: if i >= j go to (16)
    B5: 11: x = a[i]
        12: t3 = a[j]
        13: a[i] = t3
        14: a[j] = x
        15: go to (4)
    B6: 16: x = a[i]
        17: t4 = a[n]
        18: a[i] = t4
        19: a[n] = x
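The source program is presumably the inner swap loop of quicksort from Aho Chapter 9.1; a sketch consistent with the three-address code above, using the same variable names:

    i = m - 1; j = n; v = a[n];
    while (1) {
        do i = i + 1; while (a[i] < v);     /* B2 */
        do j = j - 1; while (a[j] > v);     /* B3 */
        if (i >= j) break;                  /* B4 */
        x = a[i]; a[i] = a[j]; a[j] = x;    /* B5: swap a[i] and a[j] */
    }
    x = a[i]; a[i] = a[n]; a[n] = x;        /* B6: swap a[i] and a[n] */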

Global Optimization

Flow Diagram: the block-by-block execution sequence of the three-address code. B1 falls into B2; B2 loops on itself while t1 < v; B3 loops on itself while t2 > v; B4 goes to B6 when i >= j and otherwise falls into B5; B5 jumps back to B2.

Technique 1: Eliminate Global Common Subexpressions

Eliminating global common subexpressions means removing repeated code across blocks. Example: neither i nor the array a changes between the computation of t1 = a[i] in B2 and the exit of B2. So x = a[i] in B5 and in B6 can be replaced by x = t1. This saves one array-indexing operation in each of B5 and B6.

Technique 1: Eliminate Global Common Subexpressions

Example: similarly, neither j nor the array a changes between the computation of t2 = a[j] in B3 and the exit of B3. So t3 = a[j] followed by a[i] = t3 in B5 can be replaced by the single statement a[i] = t2. This removes one line from B5.

Technique 1: Eliminate Global Common Subexpressions

Example: but be careful. The compiler cannot replace t4 = a[n] in B6 by t4 = v, even though t4 = a[n] and v = a[n] (in B1) look the same. The reason is that a[n] can be changed in B5: execution may enter B5 several times before it finally enters B6, and i or j may become n, in which case the stores in B5 modify a[n].

Technique 1: Eliminate Global Common Subexpressions

Flow diagram after performing Technique 1 (B1 through B4 are unchanged):

    B5: x = t1
        a[i] = t2
        a[j] = x
        go to (4)

    B6: x = t1
        t4 = a[n]
        a[i] = t4
        a[n] = x

Technique 2: Copy Propagation

Copy propagation: if the same value is used again and again, the compiler copies the value into one temporary variable and uses that variable afterwards. This decreases the number of variables and operations, and it is also useful later. In the generic example below, one + operation is saved. Moreover, if a and b are not required later, they can also be deleted, which reduces the number of variables. Note: the compiler cannot simply write c = a or c = b to shorten the code, because at run time c may get its value from either a or b.
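A generic example along the lines the slide describes (the variable names a, b, c, d, e, t are assumed, not from the slides): two branches both compute d + e, and the join block needs the same value:

    B1: a = d + e      (then go to B3)
    B2: b = d + e      (then go to B3)
    B3: c = d + e

B3 cannot be rewritten as c = a or c = b, since control may arrive from either branch. Instead the compiler stores the value into one shared temporary in both branches and uses it afterwards:

    B1: t = d + e
        a = t          (then go to B3)
    B2: t = d + e
        b = t          (then go to B3)
    B3: c = t

This removes the + in B3; and if a and b are not needed later, their copies can be deleted as well.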

Technique 2: Copy Propagation (in our example)

In our example: in B6 we can replace a[n] = x by a[n] = t1. Similarly, in B5 we can replace a[j] = x by a[j] = t1. (Both are valid because x holds a copy of t1.)

Technique 2: Copy Propagation

Flow diagram after copy propagation (B1 through B4 are unchanged):

    B5: x = t1
        a[i] = t2
        a[j] = t1
        go to (4)

    B6: x = t1
        t4 = a[n]
        a[i] = t4
        a[n] = t1

It looks like no improvement, because the number of lines in B5 and B6 stays the same. But it will help the next technique (dead-code elimination). See the next slides.

Technique 3: Dead-Code Elimination

Live and dead variables: if a variable is used in subsequent operations or blocks, it is a live variable. If it is not used afterwards, it is called a dead variable. Code that computes the values of dead variables is dead code. The compiler deletes dead code and dead variables.

Technique 3: Dead-Code Elimination

Example: in our example, after Technique 2 (copy propagation) has been applied, x and the statements x = t1 in B5 and B6 become dead, because no later statement uses x any more. So delete them.

Technique 3: Dead-Code Elimination

Example: flow chart after the deletion of the dead variable and the dead code (B1 through B4 are unchanged):

    B5: a[i] = t2
        a[j] = t1
        go to (4)

    B6: t4 = a[n]
        a[i] = t4
        a[n] = t1

Technique 4: Code Motion

Loops take a long time, because they run again and again. So try to reduce the code inside a loop by moving code out of the loop as much as possible. This technique is called code motion. Example: in the loop sketched below, limit does not change inside the loop, yet the minus operation in the loop test is executed many times, in fact as many times as the loop runs. After taking it out of the loop, it is executed only once.
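A sketch of that example (presumably the classic limit - 2 loop test; the loop body is elided):

    while (i <= limit - 2) {      /* no change in limit inside the loop        */
        ...                       /* the subtraction runs on every iteration   */
    }

After code motion:

    t = limit - 2;                /* executed only once, before the loop       */
    while (i <= t) {
        ...
    }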

Technique 5: Induction Variables and Reduction in Strength

Induction variables: variables whose values are recomputed inside the loop on every iteration. The compiler tries to compute the values of induction variables by increment/decrement or by addition/subtraction, avoiding expensive operations such as multiplication and division inside the loop. This technique is called reduction in strength. Example: j is an induction variable, and * is expensive:

    for (i = 0; i < limit; i++) {
        j = 4 * i;
        ...
    }

After reduction in strength, + is less expensive and the loop functions the same:

    j = -4;
    for (i = 0; i < limit; i++) {
        j = j + 4;
        ...
    }
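A quick way to convince yourself that the two loops compute the same values of j (an illustrative check, not from the slides):

    #include <assert.h>

    int main(void) {
        int limit = 10;
        int j = -4;                  /* initialization moved out of the loop */
        for (int i = 0; i < limit; i++) {
            j = j + 4;               /* strength-reduced: addition instead of 4*i */
            assert(j == 4 * i);      /* matches the original induction variable */
        }
        return 0;
    }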