Grammars. CS434 Lecture 15 Spring 2005 Department of Computer Science University of Alabama Joel Jones


Grammars CS434 Lecture 5 Spring 2005 Department of Computer Science University of Alabama Joel Jones Copyright 2003, Keith D. Cooper, Ken Kennedy & Linda Torczon, all rights reserved. Students enrolled in Comp 412 at Rice University have explicit permission to make copies of these materials for their personal use.

Left Recursion versus Right Recursion. Right recursion: required for termination in top-down parsers; uses (on average) more stack space; produces right-associative operators. Left recursion: works fine in bottom-up parsers; limits the required stack space; produces left-associative operators. Rule of thumb: left recursion for bottom-up parsers, right recursion for top-down parsers. [Figure: parse trees for w * x * y * z. Right recursion builds w * ( x * ( y * z ) ); left recursion builds ( ( w * x ) * y ) * z.]
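To see why termination splits along parser type, here is a tiny recursive-descent sketch in C. It is illustrative only: the global cursor p, term(), and the single-digit * expressions are assumptions, not part of the lecture.

    #include <stdio.h>

    static const char *p;                    /* cursor into the input string */

    static void term(void) { if (*p >= '0' && *p <= '9') p++; }  /* Term -> digit */

    /* A left-recursive rule  Expr -> Expr '*' Term, transcribed naively, calls
       itself before consuming any input, so a top-down parser never terminates:
           static void expr(void) { expr(); if (*p == '*') p++; term(); }       */

    /* The right-recursive rewrite  Expr -> Term '*' Expr  consumes a Term first,
       so each call makes progress; the nesting it builds is right-associative. */
    static void expr(void) {
        term();
        if (*p == '*') { p++; expr(); }
    }

    int main(void) {
        p = "2*3*4";
        expr();
        printf("unconsumed input: \"%s\"\n", p);   /* prints an empty string */
        return 0;
    }

A bottom-up parser has no such problem with the left-recursive form, which is why the rule of thumb above pairs left recursion with bottom-up parsers.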

Associativity. What difference does it make? It can change answers in floating-point arithmetic, and it exposes a different set of common subexpressions. Consider x + y + z. [Figure: three expression trees, an ideal three-operand operator, left association ( x + y ) + z, and right association x + ( y + z ).] What if y + z occurs elsewhere? Or x + y? Or x + z? What if x = 2 and z = 7? Neither left nor right association exposes the constant 9 (x + z). The best choice is a function of the surrounding context.
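A small C program shows the floating-point effect concretely (a sketch; the constants are chosen only to force cancellation and are not from the slides):

    #include <stdio.h>

    int main(void) {
        float x = 1.0e20f, y = -1.0e20f, z = 1.0f;
        /* Left association groups (x + y) first, so the huge values cancel
           before z is added; right association adds z to y first, where z
           is lost to rounding. The two orders print different answers.    */
        printf("(x + y) + z = %g\n", (x + y) + z);   /* prints 1 */
        printf("x + (y + z) = %g\n", x + (y + z));   /* prints 0 */
        return 0;
    }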

Hierarchy of Context-Free Languages. [Figure: the inclusion hierarchy for context-free languages, showing the context-free languages, the deterministic languages (LR(k), which equal the LR(1) languages), the LL(k) languages, the simple precedence languages, the LL(1) languages, and the operator precedence languages.]

Hierarchy of Context-Free Grammars. [Figure: the inclusion hierarchy for context-free grammars, showing context-free grammars, Floyd-Evans parsable grammars, unambiguous CFGs, operator precedence, LR(k), LR(1), LALR(1), SLR(1), LR(0), LL(k), and LL(1).] Operator precedence includes some ambiguous grammars. LL(1) is a subset of SLR(1).

Beyond Syntax. There is a level of correctness that is deeper than grammar.

    fie(a,b,c,d)
      int a, b, c, d;
    { }

    fee() {
      int f[3], g[0], h, i, j, k;
      char *p;
      fie(h, i, "ab", j, k);
      k = f * i + j;
      h = g[17];
      printf("<%s,%s>.\n", p, q);
      p = 10;
    }

What is wrong with this program? (let me count the ways...)

Beyond Syntax. There is a level of correctness that is deeper than grammar. What is wrong with the program above? (let me count the ways...) It declares g[0] but uses g[17]; it passes the wrong number of arguments to fie(); "ab" is not an int; it uses f with the wrong dimension; the variable q is undeclared; and 10 is not a character string. All of these are deeper than syntax. To generate code, we need to understand its meaning!

Beyond Syntax. To generate code, the compiler needs to answer many questions. Is x a scalar, an array, or a function? Is x declared? Are there names that are not declared? Declared but not used? Which declaration of x does each use reference? Is the expression x * y + z type-consistent? In a[i,j,k], does a have three dimensions? Where can z be stored? (register, local, global, heap, static) In f ← 15, how should 15 be represented? How many arguments does fie() take? What about printf()? Does *p reference the result of a malloc()? Do p & q refer to the same memory location? Is x defined before it is used? These questions cannot be expressed in a CFG.

Beyond Syntax These questions are part of context-sensitive analysis Answers depend on values, not parts of speech Questions & answers involve non-local information Answers may involve computation How can we answer these questions? Use formal methods Context-sensitive grammars? Attribute grammars? (attributed grammars?) Use ad-hoc techniques Symbol tables Ad-hoc code (action routines) In scanning & parsing, formalism won; different story here.

Beyond Syntax Telling the story The attribute grammar formalism is important Succinctly makes many points clear Sets the stage for actual, ad-hoc practice The problems with attribute grammars motivate practice Non-local computation Need for centralized information Some folks in the community still argue for attribute grammars Knowledge is power Information is immunization We will cover attribute grammars, then move on to ad-hoc ideas

Attribute Grammars. What is an attribute grammar? A context-free grammar augmented with a set of rules. Each symbol in the derivation has a set of values, or attributes. The rules specify how to compute a value for each attribute. Example grammar (signed binary numbers):

    Number -> Sign List
    Sign   -> + | -
    List   -> List Bit | Bit
    Bit    -> 0 | 1

This grammar describes signed binary numbers. We would like to augment it with rules that compute the decimal value of each valid input string.

Examples. [Figure: parse trees for the two running examples, -1 and -101, built from Number, Sign, List, and Bit nodes.] We will use these two throughout the lecture.

Attribute Grammars. Add rules to compute the decimal value of a signed binary number.

    Productions              Attribution Rules
    Number -> Sign List      List.pos ← 0
                             if Sign.neg
                               then Number.val ← - List.val
                               else Number.val ← List.val
    Sign -> +                Sign.neg ← false
    Sign -> -                Sign.neg ← true
    List -> Bit              Bit.pos ← List.pos
                             List.val ← Bit.val
    List0 -> List1 Bit       List1.pos ← List0.pos + 1
                             Bit.pos ← List0.pos
                             List0.val ← List1.val + Bit.val
    Bit -> 0                 Bit.val ← 0
    Bit -> 1                 Bit.val ← 2^Bit.pos
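As a concrete check of these rules, here is a short C sketch (not the lecture's code; representing the List subtree as a string of bits is an assumption made for brevity) that evaluates the running example -101:

    #include <stdio.h>
    #include <string.h>

    /* List -> List Bit | Bit, with bits[0..len-1] standing in for the List
       subtree. pos is this List's inherited attribute; the return value is
       its synthesized val. The rightmost Bit gets Bit.pos = List.pos, and
       the left sublist gets pos + 1, exactly as in the rules above.        */
    static int list_val(const char *bits, int len, int pos) {
        int bit_val = (bits[len - 1] == '1') ? (1 << pos) : 0;   /* Bit.val */
        if (len == 1) return bit_val;                            /* List -> Bit */
        return list_val(bits, len - 1, pos + 1) + bit_val;       /* List -> List Bit */
    }

    int main(void) {
        const char *input = "-101";
        int neg = (input[0] == '-');                              /* Sign.neg */
        int val = list_val(input + 1, (int)strlen(input + 1), 0); /* List.pos <- 0 */
        printf("%s evaluates to %d\n", input, neg ? -val : val);  /* prints -5 */
        return 0;
    }

The inherited pos flows down the tree as an argument; the synthesized val flows back up as the return value.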

Back to the Examples. Rules + parse tree imply an attribute dependence graph. [Figure: the parse tree for -1 with its attribute dependences: Sign.neg ← true, List.pos ← 0, Bit.pos ← List.pos, Bit.val ← 2^Bit.pos, List.val ← Bit.val, Number.val ← - List.val.] One possible evaluation order: 1 List.pos, 2 Sign.neg, 3 Bit.pos, 4 Bit.val, 5 List.val, 6 Number.val. Other orders are possible. Knuth suggested a data-flow model for evaluation: independent attributes first, others in order as input values become available. The evaluation order must be consistent with the attribute dependence graph.

Back to the Examples. [Figure: the complete attribute dependence graph for -101, with neg, pos, and val values at each node.] This is the complete attribute dependence graph for -101. It shows the flow of all attribute values in the example. Some flow downward (inherited attributes); some flow upward (synthesized attributes). A rule may use attributes in the parent, children, or siblings of a node.

The Rules of the Game Attributes associated with nodes in parse tree Rules are value assignments associated with productions Attribute is defined once, using local information Label identical terms in production for uniqueness Rules & parse tree define an attribute dependence graph Graph must be non-circular This produces a high-level, functional specification Synthesized attribute Depends on values from children Inherited attribute Depends on values from siblings & parent

Using Attribute Grammars. Attribute grammars can specify context-sensitive actions: take values from the syntax, perform computations with values, insert tests, logic, and so on. Synthesized attributes use values from children and from constants; S-attributed grammars can be evaluated in a single bottom-up pass and are a good match to LR parsing. Inherited attributes use values from the parent, constants, and siblings; they directly express context (though a grammar can be rewritten to avoid them), are thought to be more natural, and are not easily evaluated at parse time. We want to use both kinds of attribute.

Evaluation Methods Dynamic, dependence-based methods Build the parse tree Build the dependence graph Topological sort the dependence graph Define attributes in topological order Rule-based methods (treewalk) Analyze rules at compiler-generation time Determine a fixed (static) ordering Evaluate nodes in that order Oblivious methods (passes, dataflow) Ignore rules & parse tree Pick a convenient order (at design time) & use it
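For the dynamic, dependence-based strategy, the following C sketch (hypothetical code, with the dependence edges of the -1 example hard-coded and assumed acyclic) emits attributes in a topological order of the dependence graph:

    #include <stdio.h>

    enum { LIST_POS, SIGN_NEG, BIT_POS, BIT_VAL, LIST_VAL, NUMBER_VAL, N };
    static const char *name[N] = { "List.pos", "Sign.neg", "Bit.pos",
                                   "Bit.val", "List.val", "Number.val" };
    static int dep[N][N];                 /* dep[a][b] != 0: a depends on b */

    int main(void) {
        dep[BIT_POS][LIST_POS]    = 1;    /* Bit.pos    <- List.pos              */
        dep[BIT_VAL][BIT_POS]     = 1;    /* Bit.val    <- 2^Bit.pos             */
        dep[LIST_VAL][BIT_VAL]    = 1;    /* List.val   <- Bit.val               */
        dep[NUMBER_VAL][SIGN_NEG] = 1;    /* Number.val <- Sign.neg and List.val */
        dep[NUMBER_VAL][LIST_VAL] = 1;

        int done[N] = {0};
        for (int emitted = 0; emitted < N; ) {
            /* Emit any attribute whose dependences are all satisfied. */
            for (int a = 0; a < N; a++) {
                if (done[a]) continue;
                int ready = 1;
                for (int b = 0; b < N; b++)
                    if (dep[a][b] && !done[b]) ready = 0;
                if (ready) { printf("%d: %s\n", ++emitted, name[a]); done[a] = 1; }
            }
        }
        return 0;
    }

On these edges it prints the order 1 List.pos, 2 Sign.neg, 3 Bit.pos, 4 Bit.val, 5 List.val, 6 Number.val, matching the evaluation order shown earlier.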

Back to the Example. [Figure: the parse tree for -101, built from Number, Sign, List, and Bit nodes.]

Back to the Example. [Figure: the parse tree for -101 with empty slots for each node's attributes (val, neg, pos) before evaluation.]

Back to the Example. Inherited attributes. [Figure: the parse tree for -101 with the inherited pos attributes filled in; pos flows down the tree from List.pos = 0 to the pos values 1 and 2 at the deeper List and Bit nodes.]

Back to the Example. Synthesized attributes. [Figure: the parse tree for -101 with the synthesized val attributes filled in; val flows up the tree, giving List.val = 5 and, with Sign.neg = true, Number.val = -5.]

Back to the Example. If we show the computation and then peel away the parse tree... [Figure: the attribute values for -101 with the parse tree removed, leaving only the dependences between attributes.]

Back to the Example. [Figure: the attribute dependence graph for -101, with Number.val = -5, Sign.neg = true, List.val = 5, and the pos and val values at each node.] All that is left is the attribute dependence graph. This succinctly represents the flow of values in the problem instance. The dynamic methods sort this graph to find independent values, then work along graph edges. The rule-based methods try to discover good orders by analyzing the rules. The oblivious methods ignore the structure of this graph. The dependence graph must be acyclic.

Circularity. We can only evaluate acyclic instances. We can prove that some grammars can only generate instances with acyclic dependence graphs. The largest such class is the strongly non-circular grammars (SNC). SNC grammars can be tested in polynomial time. Failing the SNC test is not conclusive. Many evaluation methods discover circularity dynamically, which is a bad property for a compiler to have. SNC grammars were first defined by Kennedy & Warren.

A Circular Attribute Grammar.

    Productions              Attribution Rules
    Number -> List           List.a ← 0
    List0 -> List1 Bit       List1.a ← List0.a + 1
                             List0.b ← List1.b
                             List1.c ← List1.b + Bit.val
    List -> Bit              List.b ← List.a + List.c + Bit.val
    Bit -> 0                 Bit.val ← 0
    Bit -> 1                 Bit.val ← 2^Bit.pos

Here List.b depends on List.c, and List.c depends on List.b, so the attribute dependence graph of an instance contains a cycle.
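To show how an evaluator discovers this at run time, here is a C sketch (hypothetical code; the edges are a simplified instance of the b/c cycle above, collapsed onto a single List node) that detects a cycle in a dependence graph with a depth-first search:

    #include <stdio.h>

    enum { LIST_A, LIST_B, LIST_C, BIT_VAL, N };
    static const char *name[N] = { "List.a", "List.b", "List.c", "Bit.val" };
    static int dep[N][N];               /* dep[x][y] != 0: x depends on y   */
    static int state[N];                /* 0 = new, 1 = on stack, 2 = done  */

    static int has_cycle(int x) {
        if (state[x] == 1) { printf("cycle reaches %s\n", name[x]); return 1; }
        if (state[x] == 2) return 0;
        state[x] = 1;
        for (int y = 0; y < N; y++)
            if (dep[x][y] && has_cycle(y)) return 1;
        state[x] = 2;
        return 0;
    }

    int main(void) {
        dep[LIST_B][LIST_A]  = 1;       /* List.b <- List.a + List.c + Bit.val */
        dep[LIST_B][LIST_C]  = 1;
        dep[LIST_B][BIT_VAL] = 1;
        dep[LIST_C][LIST_B]  = 1;       /* List.c <- List.b + Bit.val          */
        dep[LIST_C][BIT_VAL] = 1;
        printf("circular? %s\n", has_cycle(LIST_B) ? "yes" : "no");  /* yes */
        return 0;
    }

This is the dynamic discovery the previous slide warns about: the cycle only shows up when an instance is evaluated.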

An Extended Example. Grammar for a basic block (§ 4.3.3). Let's estimate cycle counts: each operation has a COST; add them, bottom up. Assume a load per value and assume no reuse. A simple problem for an AG. Hey, this looks useful!

An Extended Example (continued). Adding attribution rules:

    Block0 -> Block1 Assign    Block0.cost ← Block1.cost + Assign.cost
            | Assign           Block0.cost ← Assign.cost
    Assign -> Ident = Expr ;   Assign.cost ← COST(store) + Expr.cost
    Expr0 -> Expr1 + Term      Expr0.cost ← Expr1.cost + COST(add) + Term.cost
           | Expr1 - Term      Expr0.cost ← Expr1.cost + COST(sub) + Term.cost
           | Term              Expr0.cost ← Term.cost
    Term0 -> Term1 * Factor    Term0.cost ← Term1.cost + COST(mult) + Factor.cost
           | Term1 / Factor    Term0.cost ← Term1.cost + COST(div) + Factor.cost
           | Factor            Term0.cost ← Factor.cost
    Factor -> ( Expr )         Factor.cost ← Expr.cost
            | Number           Factor.cost ← COST(loadI)
            | Identifier       Factor.cost ← COST(load)

All these attributes are synthesized!
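A bottom-up tree walk computes this synthesized cost directly. The C sketch below is illustrative only: the node kinds, the unit-cost COST() table, and the example statement x = y * z + 4; are assumptions, not the lecture's code.

    #include <stdio.h>

    enum kind { ASSIGN, ADD, MULT, IDENT, NUMBER };

    struct node {
        enum kind kind;
        struct node *left, *right;       /* unused children are NULL */
    };

    /* COST() table: store, add, mult, load, loadI all cost 1 cycle here. */
    static int COST(const char *op) { (void)op; return 1; }

    static int cost(const struct node *n) {
        switch (n->kind) {
        case ASSIGN: return COST("store") + cost(n->right);
        case ADD:    return cost(n->left) + COST("add")  + cost(n->right);
        case MULT:   return cost(n->left) + COST("mult") + cost(n->right);
        case IDENT:  return COST("load");    /* Factor -> Identifier */
        case NUMBER: return COST("loadI");   /* Factor -> Number     */
        }
        return 0;
    }

    int main(void) {
        /* x = y * z + 4; */
        struct node y = { IDENT, 0, 0 }, z = { IDENT, 0, 0 }, four = { NUMBER, 0, 0 };
        struct node mul = { MULT, &y, &z };
        struct node sum = { ADD, &mul, &four };
        struct node asg = { ASSIGN, 0, &sum };
        printf("estimated cost: %d cycles\n", cost(&asg));   /* prints 6 */
        return 0;
    }

Because every rule uses only values from a node's children, a single bottom-up pass suffices, which is exactly the S-attributed property the next slide points out.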

An Extended Example. Properties of the example grammar: all attributes are synthesized ⇒ an S-attributed grammar; the rules can be evaluated bottom-up in a single pass, a good fit to a bottom-up, shift/reduce parser. It is an easily understood solution and seems to fit the problem well. What about an improvement? Load each value only once per block (not at each use); to do that, we need to track which values have already been loaded.

A Better Execution Model. Adding load tracking: we need sets Before and After for each production, which must be initialized, updated, and passed around the tree.

    Factor -> ( Expr )     Factor.cost ← Expr.cost ;
                           Expr.Before ← Factor.Before ;
                           Factor.After ← Expr.After
    Factor -> Number       Factor.cost ← COST(loadI) ;
                           Factor.After ← Factor.Before
    Factor -> Identifier   if (Identifier.name ∉ Factor.Before)
                             then Factor.cost ← COST(load) ;
                                  Factor.After ← Factor.Before ∪ { Identifier.name }
                             else Factor.cost ← 0 ;
                                  Factor.After ← Factor.Before

This looks more complex!
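A minimal C sketch of the same idea (hypothetical code; the fixed-size name set and the four identifier uses stand in for a real Before/After threading): an identifier is charged COST(load) only when its name is not yet in the set.

    #include <stdio.h>
    #include <string.h>

    #define MAX_NAMES 32

    struct loaded { const char *name[MAX_NAMES]; int n; };

    static int member(const struct loaded *s, const char *x) {
        for (int i = 0; i < s->n; i++)
            if (strcmp(s->name[i], x) == 0) return 1;
        return 0;
    }

    /* Factor -> Identifier with load tracking:
       charge a load only if the name is not already in the Before set. */
    static int ident_cost(const char *name, struct loaded *before_after) {
        if (member(before_after, name)) return 0;        /* already loaded         */
        before_after->name[before_after->n++] = name;    /* After = Before U {name} */
        return 1;                                        /* COST(load)             */
    }

    int main(void) {
        /* Block:  x = y * z;  w = y + z;   -- y and z are each loaded once */
        struct loaded s = { {0}, 0 };
        const char *uses[] = { "y", "z", "y", "z" };
        int cost = 0;
        for (int i = 0; i < 4; i++) cost += ident_cost(uses[i], &s);
        printf("loads charged: %d\n", cost);   /* prints 2, not 4 */
        return 0;
    }

Threading one set through the walk this way is what the copy rules on the next slide do for every production.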

A Better Execution Model. Load tracking adds complexity, but most of it is in the copy rules: every production needs rules to copy Before & After. A sample production:

    Expr0 -> Expr1 + Term    Expr0.cost ← Expr1.cost + COST(add) + Term.cost ;
                             Expr1.Before ← Expr0.Before ;
                             Term.Before ← Expr1.After ;
                             Expr0.After ← Term.After

These copy rules multiply rapidly. Each creates an instance of the set. Lots of work, lots of space, lots of rules to write.

An Even Better Model. What about accounting for finite register sets? Before & After must be of limited size. This adds complexity to Factor -> Identifier and requires more complex initialization. Still, the jump from tracking loads to tracking registers is small: the copy rules are already in place, and some local code performs the allocation. Next class: curing these problems with ad-hoc syntax-directed translation.