CS 403 Compiler Construction Lecture 10 Code Optimization [Based on Chapter 8.5, 9.1 of Aho2]


This Lecture

Remember: Phases of a Compiler

This lecture: Code Optimization.

What is Code Optimization?

We have intermediate code, such as a DAG and three-address code. The compiler sometimes constructs the DAG and the three-address code together, one after another. From the DAG and the three-address code, the compiler performs code optimization. Code optimization means: the compiler deletes unnecessary code and reduces the number of operations; the compiler replaces slower code with faster code; and more.

There are two types of code optimization:
Local: optimization within each basic block of intermediate code. (A basic block is a block of code that runs sequentially, that is, it contains no jump instructions.)
Global: optimization between basic blocks.

There are many techniques to optimize intermediate code.

Local Optimization

Technique 1: Finding and Eliminating Local Common Subexpressions

A local common subexpression is code that computes a value that has already been computed. From a block of code, the compiler creates the DAG again. During this construction, the compiler looks for local common subexpressions and eliminates them as follows: if the compiler is about to create a node N that has the same children, in the same order, with the same operator as an existing node M, then N and M are the same. So the compiler uses M; there is no need to create N.
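As a minimal sketch of that node-reuse check (this C fragment is illustrative only; the array-based lookup and the name get_node are assumptions, not from the slides):

#include <stdlib.h>

/* A DAG node: an operator and (up to) two children; leaves use op = 0. */
typedef struct Node { char op; struct Node *left, *right; } Node;

static Node *nodes[1000];   /* nodes already created for this basic block */
static int nnodes = 0;

/* Return an existing node M with the same operator and the same children
   in the same order, if there is one; otherwise create a new node N. */
Node *get_node(char op, Node *left, Node *right) {
    for (int i = 0; i < nnodes; i++)
        if (nodes[i]->op == op && nodes[i]->left == left && nodes[i]->right == right)
            return nodes[i];              /* common subexpression: reuse M */
    Node *n = malloc(sizeof *n);          /* no match: create node N */
    n->op = op; n->left = left; n->right = right;
    nodes[nnodes++] = n;
    return n;
}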

Technique 1: Finding and Eliminating Local Common Subexpressions

Example 1: The compiler constructs a DAG for a block of three-address code (reconstructed after this discussion) and eliminates local common subexpressions. The compiler checks and creates the DAG, then rewrites the code, step by step:
First line: the compiler creates one internal node.
Second line: not the same as the first line, so the compiler creates another internal node.
Third line: it looks the same as the first line, but actually it is not, because in the first line the children are b0 and c0, while in the third line b has been changed, so it is no longer b0. The compiler therefore creates a new internal node for this line. (A subscript 0 means the initial value; no subscript means the value has changed.)
Fourth line: the same as the second line, because a is the same a after the second line, d is d0 in both places, and their order is also the same. So the compiler uses the node built for b in place of a new one for d, and then rewrites the three-address code to get shorter code.

Technique 1: Finding and Eliminating Local Common Subexpressions (does not always optimize)

Example 2: Constructing the DAG does not always optimize. For the second block below, the compiler checks and creates the DAG, then rewrites the code, step by step:
First line: the compiler creates one internal node.
Second line: not the same as the first line, so the compiler creates another internal node.
Third line: not the same as the first or the second line, so the compiler creates another new internal node.
Fourth line: not the same as the first line, because b and c have been changed, so the compiler creates a new internal node.
Now the DAG has four internal nodes, so rewriting from it gives the same three-address code back. BUT the fourth line is actually e = b + c = (b0 - d0) + (c0 + d0) = b0 + c0, the same value as the first line. So e and a are the same and the fourth line is not required. Technique 1 cannot detect this.
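The two blocks being described did not come through in the transcription; the following reconstruction matches the step-by-step reasoning above (and the standard examples of Aho2, Section 8.5), so treat the exact listings as assumptions.

Example 1 block:

a = b + c
b = a - d
c = b + c
d = a - d

The fourth line recomputes a - d with unchanged operands, so after eliminating the local common subexpression the compiler rewrites it as:

a = b + c
b = a - d
c = b + c
d = b

Example 2 block:

a = b + c
b = b - d
c = c + d
e = b + c

Here every line gets its own DAG node, so the rewritten code is unchanged, even though e = b + c equals (b0 - d0) + (c0 + d0) = b0 + c0, which is exactly a.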

Technique 2: Use of Algebraic Identities

There are many techniques here.

Algebraic identities: The compiler applies algebraic identities such as x + 0 = x and x / 1 = x. That means x + 0 can be replaced by x, x / 1 can be replaced by x, and so on.

Reduction in strength: Where possible, the compiler applies algebraic optimization to replace an expensive operation by a cheaper one (* is cheaper than exponentiation, + is cheaper than *, * is cheaper than /, etc.).

Constant folding: Operations on constants can be evaluated at compile time and replaced by their constant values.

The compiler does all of these on the DAG and rewrites the code. Example: next slide.

Technique 2: Use of Algebraic Identities

Example: The compiler optimizes the code for (x+0)*(4/2) as follows. Answer: the code is the same as (x+0)*(4/2) = x*2 = x+x. So the compiler writes the three-address code and the DAG, then optimizes and finally rewrites:

t1 = x + 0
t2 = 4 / 2
t3 = t1 * t2

Constant folding turns 4 / 2 into 2, the algebraic identity turns x + 0 into x, and reduction in strength turns x * 2 into x + x, leaving:

t1 = x + x

Technique 2: Use of Algebraic Identities (continued)

Replace >, <, == by -: The comparisons >, < and == can be done with -, because - is cheaper than >, <, and ==. For example, x > y is the same as "x - y is positive", x < y is the same as "x - y is negative", and x == y is the same as "x - y is zero".

Apply associativity to reduce code: Two pieces of code that differ only in how + groups its operands are the same because of the associativity of +, so the compiler can regroup them to expose common subexpressions.

Rearrangement of operations: Rearrange or simplify operations to reduce the total number of operations. For example, x*y - x*z can be rewritten as x*(y - z), saving one multiplication.

BUT, be careful: We need to be careful when we optimize in compiler design, because errors can creep in. For example, it is wrong to replace the third array access A[i] below by x, because j may be equal to i, in which case A[i] may have been changed:

X = A[i]; A[j] = y; Z = A[i];

must not be rewritten as

X = A[i]; A[j] = y; Z = x;

Global Optimization

Global Optimization Techniques

Similar to local optimization, there are many techniques for global optimization:
Eliminate global common subexpressions
Copy propagation
Dead-code elimination
Constant folding
However, a transformation (optimization) must be a semantics-preserving transformation; the meaning must not change when code is moved from block to block.

Global Optimization

We shall explain global optimization with the following example. The left side of the slide shows the program code; the right side shows the three-address code generated by the compiler, divided into basic blocks:

1:
2:
3:
4:
5:
6: if t1 < v go to (4)
7:
8:
9: if t2 > v go to (7)
10: if i >= j go to (16)
11: x = a[i]
12: t3 = a[j]
13: a[i] = t3
14: a[j] = x
15: go to (4)
16: x = a[i]
17:
18:
19: a[n] = x
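The program code and the blank lines above were not readable. The visible fragment matches the quicksort-partition running example of Aho2, Chapter 9.1, so the following reconstruction is offered as an assumption, not as the slide's exact content.

Program code (a sketch):

i = m - 1; j = n; v = a[n];
while (1) {
    do i = i + 1; while (a[i] < v);
    do j = j - 1; while (a[j] > v);
    if (i >= j) break;
    x = a[i]; a[i] = a[j]; a[j] = x;    /* swap a[i] and a[j] */
}
x = a[i]; a[i] = a[n]; a[n] = x;        /* swap a[i] and a[n] */

Three-address code (lines 1-5, 7-8, and 17-18 filled in by assumption):

1: i = m - 1
2: j = n
3: v = a[n]
4: i = i + 1
5: t1 = a[i]
6: if t1 < v go to (4)
7: j = j - 1
8: t2 = a[j]
9: if t2 > v go to (7)
10: if i >= j go to (16)
11: x = a[i]
12: t3 = a[j]
13: a[i] = t3
14: a[j] = x
15: go to (4)
16: x = a[i]
17: t4 = a[n]
18: a[i] = t4
19: a[n] = x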

Global Optimization

Flow Diagram: the block-by-block execution sequence of the three-address code. The figure shows the listing above divided into basic blocks, with edges giving the possible execution order between the blocks.

Technique 1: Eliminate Global Common Subexpressions

Eliminate global common subexpressions: remove repeated code. Example: neither i nor the array a changes between the block that computes t1 = a[i] and the later blocks that compute x = a[i]. So x = a[i] in those two later blocks can be replaced by x = t1. This removes one array-indexing operation from each of them.
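A sketch of the flow diagram, using the reconstructed listing above; the block boundaries and the names B1 to B6 are assumptions:

B1: lines 1-3   (i = m - 1; j = n; v = a[n])
B2: lines 4-6   (i = i + 1; t1 = a[i]; if t1 < v go to B2)
B3: lines 7-9   (j = j - 1; t2 = a[j]; if t2 > v go to B3)
B4: line 10     (if i >= j go to B6)
B5: lines 11-15 (x = a[i]; t3 = a[j]; a[i] = t3; a[j] = x; go to B2)
B6: lines 16-19 (x = a[i]; t4 = a[n]; a[i] = t4; a[n] = x)

Edges: B1 -> B2; B2 -> B2 and B3; B3 -> B3 and B4; B4 -> B5 and B6; B5 -> B2.

With Technique 1, x = a[i] in B5 and B6 becomes x = t1.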

Technique 1: Eliminate Global Common Subexpressions (continued)

Example: Similarly, neither j nor the array a changes between the block that computes t2 = a[j] and the swap block. So t3 = a[j] followed by a[i] = t3 in the swap block can be replaced by a[i] = t2. This removes one line from that block.

Example: But be careful. The compiler cannot replace t4 = a[n] in the final block by t4 = v, even though the two look the same (v holds the value loaded from a[n] at the start). The reason is that a[n] can be changed in the swap block if execution passes through it before finally reaching the last block and i or j happens to equal n.

Technique 1: Eliminate Global Common Subexpressions (continued)

Flow diagram after performing Technique 1: the repeated array accesses are gone; the swap block now uses x = t1 and a[i] = t2, and the final block still ends with a[n] = x.

Technique 2: Copy Propagation

Copy propagation: If the same operation is needed again and again, the compiler copies the operation's value into a temporary variable and uses that variable afterwards. This decreases the number of variables and operations, and it is also useful later. Example (see the sketch below): one + operation is saved. Moreover, if a and b are not required later, they can also be deleted, which reduces the number of variables. Note: the compiler cannot simply write c = a or c = b to reduce the code, because c may be either a or b, depending on the path taken.
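The example itself was not readable; the following sketch matches the description (one + saved, a and b possibly deletable, c = a or c = b not allowed) and follows the corresponding example in Aho2, Section 9.1, so treat the names and layout as assumptions.

Before (two branches join in a third block):

a = d + e        b = d + e
        \         /
        c = d + e

After introducing a new temporary t and propagating the copy:

t = d + e        t = d + e
a = t            b = t
        \         /
         c = t

The + in the join block is gone, so one + operation is saved. If a and b are not used later, a = t and b = t are dead and can be removed as well. Writing c = a or c = b directly would be wrong, because which of a or b holds d + e depends on the path taken into the join block.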

Technique 2: Copy Propagation (in our example)

In our example: since x is now a copy of t1, in the final block we can replace a[n] = x by a[n] = t1. Similarly, in the swap block we can replace a[j] = x by a[j] = t1.

Flow diagram after copy propagation: it looks like no improvement, because the number of lines in those two blocks stays the same. But it will help in the next technique (dead-code elimination). See the next slides.

Technique 3: Dead-Code Elimination

Live and dead variables: If a variable is used in subsequent operations or blocks, it is a live variable. If it is not used afterwards, it is called a dead variable. Code that computes the value of a dead variable is called dead code. The compiler deletes dead code and dead variables.

Example: In our example, after Technique 2 has been applied (that is, after copy propagation), x and the assignments x = t1 in the swap block and the final block become dead. So the compiler deletes them.

Technique 3: Dead-Code Elimination (continued)

Example: Flow chart after the deletion of the dead variable and dead code: the swap block now reads a[i] = t2; a[j] = t1; go to the loop test, and the final block ends with a[n] = t1.

Technique 4: Code Motion

Code motion: Loops take a long time because they run again and again. So we try to reduce the code inside a loop by moving code out of the loop as much as possible. This technique is called code motion. Example (see the sketch below): the loop condition contains a subtraction whose operands do not change inside the loop (limit is not modified in the loop body), yet the minus operation is executed every time the loop test runs, that is, as many times as the loop executes. Code motion takes this computation out of the loop, so it is executed only once, before the loop.
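The loop in the slide was not readable; the following sketch uses the classic limit - 2 loop condition (an assumption) to illustrate the transformation:

while (i <= limit - 2) {          /* limit - 2 is recomputed on every iteration */
    /* loop body; no change to limit */
}

/* after code motion: */
t = limit - 2;                    /* computed once, before the loop */
while (i <= t) {
    /* loop body; no change to limit */
}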

Technique 5: Induction Variables and Reduction in Strength

Induction variables: variables whose values are computed inside the loop in every iteration. The compiler tries to compute the values of induction variables by increment/decrement or by addition/subtraction. It avoids expensive operations, such as multiplication and division, for computing the values of induction variables inside the loop. This technique is called reduction in strength.

Example:

for (i = 0; i < limit; i++) { j = 4*i; ... }

Here j is an induction variable, and * is expensive. After reduction in strength:

j = -4;
for (i = 0; i < limit; i++) { j = j + 4; ... }

+ is less expensive, and the loop behaves the same.
