CS 406/534 Compiler Construction LR(1) Parsing and CSA

CS 406/534 Compiler Construction: LR(1) Parsing and CSA
Prof. Li Xu, Dept. of Computer Science, UMass Lowell, Fall 2004
Part of the course lecture notes are based on Prof. Keith Cooper, Prof. Ken Kennedy and Dr. Linda Torczon's teaching materials at Rice University. All rights reserved.

Administrivia
HW2 posted on the web
Start grading lab1 and hw1
Turn in midterm today
Waiting for more web feedback

What We Did Last Time
Table-driven LL(1) parsing
LR(1) shift-reduce parsing
Lab 2 mechanics

Today's Goals
More examples of LR(1) parsing
Context-sensitive analysis
  Attribute grammars
  Ad-hoc techniques

LR(1) Parsing
Bottom-up, shift-reduce parsing
  Builds the reverse rightmost derivation
  The key is to find the handle
  All active handles include the top of stack (TOS)
  Shift inputs until TOS is the right end of a handle
The language of handles is regular (finite)
  Build a handle-recognizing DFA
  The ACTION & GOTO tables encode the DFA

LR(1) Parsing: The Skeleton Parser

push INVALID
push start state s0
word ← NextWord()
while (true)
  s ← top of stack
  if ACTION[s, word] = "shift si" then
    push word
    push si
    word ← NextWord()
  else if ACTION[s, word] = "reduce A → β" then
    pop 2 × |β| symbols
    s ← top of stack
    push A
    push GOTO[s, A]
  else if ACTION[s, word] = "accept" then
    return success
  else
    return syntax error
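
The same skeleton, written as a minimal Python sketch. It assumes ACTION and GOTO are dictionaries keyed by (state, symbol); the entry shapes ("shift", k), ("reduce", lhs, rhs_len), and ("accept",) are conventions of this sketch, not something defined on the slide.

    def lr1_parse(words, ACTION, GOTO, start_state=0):
        stack = ["INVALID", start_state]               # alternating symbol / state
        words = list(words) + ["EOF"]
        i = 0
        while True:
            state, word = stack[-1], words[i]
            entry = ACTION.get((state, word), ("error",))
            if entry[0] == "shift":
                stack.append(word)                     # push word
                stack.append(entry[1])                 # push the successor state si
                i += 1
            elif entry[0] == "reduce":                 # reduce A -> beta
                _, lhs, rhs_len = entry
                del stack[len(stack) - 2 * rhs_len:]   # pop 2 x |beta| stack entries
                state = stack[-1]
                stack.append(lhs)                      # push A
                stack.append(GOTO[(state, lhs)])       # push Goto[s, A]
            elif entry[0] == "accept":
                return True                            # success
            else:
                raise SyntaxError(f"unexpected {word!r} in state {state}")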

LR(1) Table Construction
High-level overview:
1. Build the canonical collection of sets of LR(1) items
   a. Begin in an appropriate start state s0: [S' → • S, EOF], along with any equivalent items; derive the equivalent items as closure(s0)
   b. Repeatedly compute, for each sk and each grammar symbol X, goto(sk, X)
      If the set is not already in the collection, add it
      Record all the transitions created by goto( )
      This eventually reaches a fixed point
2. Fill in the table from the collection of sets of LR(1) items
The canonical collection completely encodes the transition diagram for the handle-finding DFA.
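
Step 1 can be sketched in Python as a worklist loop that runs to a fixed point. The closure and goto_set helpers are assumed (closure appears on the next slide), and representing each set of items as a frozenset is a choice of this sketch.

    def canonical_collection(start_item, symbols, closure, goto_set):
        # Build the canonical collection and the goto transitions between its sets.
        s0 = closure(frozenset([start_item]))          # closure(s0)
        collection = {s0}
        transitions = {}
        worklist = [s0]
        while worklist:                                # repeat until nothing new appears
            s = worklist.pop()
            for x in symbols:                          # each grammar symbol X
                t = goto_set(s, x)
                if not t:
                    continue
                if t not in collection:                # not already in the collection: add it
                    collection.add(t)
                    worklist.append(t)
                transitions[(s, x)] = t                # record the goto( ) transition
        return collection, transitions                 # reached a fixed point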

Example
Simplified, right-recursive expression grammar:
  Goal → Expr
  Expr → Term - Expr
  Expr → Term
  Term → Factor * Term
  Term → Factor
  Factor → ident

Closure(s)
  while (s is still changing)
    ∀ items [A → β • B δ, a] ∈ s
      ∀ productions B → τ ∈ P
        ∀ b ∈ FIRST(δ a)    // δ might be ε
          if [B → • τ, b] ∉ s
            then add [B → • τ, b] to s

FIRST sets:
  Goal    { ident }
  Expr    { ident }
  Term    { ident }
  Factor  { ident }
  -       { - }
  *       { * }
  ident   { ident }
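
Rendered in Python, Closure(s) might look like the sketch below. It assumes an item is a tuple (lhs, rhs, dot, lookahead) with rhs a tuple of symbols, grammar maps each nonterminal to its right-hand sides, and first_of(symbols) returns FIRST of a symbol string; these representations are all choices of the sketch.

    def closure(items, grammar, first_of):
        s = set(items)
        changed = True
        while changed:                                     # while s is still changing
            changed = False
            for (lhs, rhs, dot, a) in list(s):
                if dot >= len(rhs) or rhs[dot] not in grammar:
                    continue                               # dot is not in front of a nonterminal B
                B = rhs[dot]
                delta_a = rhs[dot + 1:] + (a,)             # the string "delta a"; delta may be empty
                for tau in grammar[B]:                     # each production B -> tau
                    for b in first_of(delta_a):            # each b in FIRST(delta a)
                        item = (B, tau, 0, b)              # [B -> . tau, b]
                        if item not in s:
                            s.add(item)
                            changed = True
        return frozenset(s)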

Initialization Step
s0 ← closure( { [Goal → • Expr, EOF] } )
   = { [Goal → • Expr, EOF],  [Expr → • Term - Expr, EOF],  [Expr → • Term, EOF],
       [Term → • Factor * Term, EOF],  [Term → • Factor, EOF],  [Factor → • ident, EOF],
       [Term → • Factor, -],  [Term → • Factor * Term, -],
       [Factor → • ident, -],  [Factor → • ident, *] }
S ← { s0 }

LR(1) Items
Iteration 1:  s1 ← goto(s0, Expr),  s2 ← goto(s0, Term),  s3 ← goto(s0, Factor),  s4 ← goto(s0, ident)
Iteration 2:  s5 ← goto(s2, -),  s6 ← goto(s3, *)
Iteration 3:  s7 ← goto(s5, Expr),  s8 ← goto(s6, Term)

LR(1) Items
S0: { [Goal → • Expr, EOF],  [Expr → • Term - Expr, EOF],  [Expr → • Term, EOF],
      [Term → • Factor * Term, EOF],  [Term → • Factor * Term, -],
      [Term → • Factor, EOF],  [Term → • Factor, -],
      [Factor → • ident, EOF],  [Factor → • ident, -],  [Factor → • ident, *] }
S1: { [Goal → Expr •, EOF] }
S2: { [Expr → Term • - Expr, EOF],  [Expr → Term •, EOF] }
S3: { [Term → Factor • * Term, EOF],  [Term → Factor • * Term, -],
      [Term → Factor •, EOF],  [Term → Factor •, -] }
S4: { [Factor → ident •, EOF],  [Factor → ident •, -],  [Factor → ident •, *] }
S5: { [Expr → Term - • Expr, EOF],  [Expr → • Term - Expr, EOF],  [Expr → • Term, EOF],
      [Term → • Factor * Term, -],  [Term → • Factor, -],
      [Term → • Factor * Term, EOF],  [Term → • Factor, EOF],
      [Factor → • ident, *],  [Factor → • ident, -],  [Factor → • ident, EOF] }

LR(1) Items
S6: { [Term → Factor * • Term, EOF],  [Term → Factor * • Term, -],
      [Term → • Factor * Term, EOF],  [Term → • Factor * Term, -],
      [Term → • Factor, EOF],  [Term → • Factor, -],
      [Factor → • ident, EOF],  [Factor → • ident, -],  [Factor → • ident, *] }
S7: { [Expr → Term - Expr •, EOF] }
S8: { [Term → Factor * Term •, EOF],  [Term → Factor * Term •, -] }

Goto Relations
The goto relationship (from the construction):

State   Expr   Term   Factor   -   *   ident
  0       1      2       3              4
  1
  2                            5
  3                                6
  4
  5       7      2       3              4
  6              8       3              4
  7
  8

ACTION and GOTO Tables
The algorithm:
∀ set sx ∈ S
  ∀ item i ∈ sx
    if i is [A → β • a δ, b] and goto(sx, a) = sk, with a ∈ T
      then ACTION[x, a] ← "shift k"
    else if i is [A → β •, a]
      then ACTION[x, a] ← "reduce A → β"
    else if i is [S' → S •, EOF]
      then ACTION[x, EOF] ← "accept"
  ∀ n ∈ NT
    if goto(sx, n) = sk
      then GOTO[x, n] ← k
(x is the state number of sx)
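
The same fill-in step, as a rough Python sketch that reuses the item tuples and the transitions map from the construction sketch earlier; state_no is assumed to map each item set to its state number x.

    def fill_tables(collection, transitions, state_no, terminals, goal, eof="EOF"):
        ACTION, GOTO = {}, {}
        for s in collection:
            x = state_no[s]
            for (lhs, rhs, dot, a) in s:
                if dot < len(rhs) and rhs[dot] in terminals:
                    k = state_no[transitions[(s, rhs[dot])]]
                    ACTION[(x, rhs[dot])] = ("shift", k)        # [A -> beta . a delta, b]
                elif dot == len(rhs) and lhs == goal and a == eof:
                    ACTION[(x, eof)] = ("accept",)              # [Goal -> S ., EOF]
                elif dot == len(rhs):
                    ACTION[(x, a)] = ("reduce", lhs, len(rhs))  # [A -> beta ., a]
            for (src, n), dst in transitions.items():
                if src == s and n not in terminals:             # nonterminal transitions fill GOTO
                    GOTO[(x, n)] = state_no[dst]
        return ACTION, GOTO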

ACTION and GOTO Tables
The algorithm produces the following table:

                 ACTION                      GOTO
State   ident    -     *     EOF     Expr   Term   Factor
  0      s4                            1      2       3
  1                          acc
  2              s5          r3
  3              r5    s6    r5
  4              r6    r6    r6
  5      s4                            7      2       3
  6      s4                                   8       3
  7                          r2
  8              r4          r4

This plugs into the skeleton LR(1) parser.

Table Conflicts
What if set s contains both [A → β • a γ, b] and [B → β •, a]?
  The first item generates "shift"; the second generates "reduce"
  Both define ACTION[s, a], and the parser cannot do both actions
  This fundamental ambiguity is called a shift/reduce conflict
  Modify the grammar to eliminate it (if-then-else)
  Shifting will often resolve it correctly
What if set s contains both [A → γ •, a] and [B → γ •, a]?
  Each generates "reduce", but with a different production
  Both define ACTION[s, a], and the parser cannot do both reductions
  This fundamental ambiguity is called a reduce/reduce conflict
  Modify the grammar to eliminate it (PL/I's overloading of "(...)")
In either case, the grammar is not LR(1).
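
A small guard that could be dropped into the table-filling sketch above to surface these conflicts instead of silently overwriting an entry (illustrative only; real generators such as yacc report conflicts with much more context).

    def set_action(ACTION, key, new_entry):
        old = ACTION.get(key)
        if old is not None and old != new_entry:
            kind = old[0] + "/" + new_entry[0]          # e.g. "shift/reduce" or "reduce/reduce"
            raise ValueError("%s conflict at ACTION%s: %s vs %s"
                             % (kind, key, old, new_entry))
        ACTION[key] = new_entry                         # if this ever raises, the grammar is not LR(1)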

Shrinking the Tables
Three options:
  Combine terminals, such as number & identifier, + & -, * & /
    Directly removes a column, and may remove a row
    For the expression grammar, 198 (vs. 384) table entries
  Combine rows or columns
    Implement identical rows once & remap states
    Requires extra indirection on each lookup
    Use separate mappings for ACTION & for GOTO
  Use another construction algorithm
    Both LALR(1) and SLR(1) produce smaller tables
    Implementations are readily available

LR(k) versus LL(k): Finding Reductions
LR(k): each reduction in the parse is detectable with
  1. the complete left context,
  2. the reducible phrase itself, and
  3. the k terminal symbols to its right
LL(k): the parser must select the production to expand based on
  1. the complete left context, and
  2. the next k terminals
Thus, LR(k) examines more context.
"... in practice, programming languages do not actually seem to fall in the gap between LL(1) languages and deterministic languages."
(J.J. Horning, "LR Grammars and Analysers", in Compiler Construction: An Advanced Course, Springer-Verlag, 1976)

Summary
Top-down recursive descent
  Advantages: fast, good locality, simplicity, good error detection
  Disadvantages: hand-coded, high maintenance, right associativity
LR(1)
  Advantages: fast, handles the deterministic languages, automatable, left associativity
  Disadvantages: large working sets, poor error messages, large table sizes

Left versus Right Recursion
Right recursion
  Required for termination in top-down parsers
  Produces right-associative operators
Left recursion
  Works fine in bottom-up parsers
  Produces left-associative operators
Rule of thumb
  Left recursion for bottom-up parsers
  Right recursion for top-down parsers
For w * x * y * z: right recursion builds the tree for w * ( x * ( y * z ) ); left recursion builds the tree for ( ( w * x ) * y ) * z.

Associativity
What difference does it make?
  It can change answers in floating-point arithmetic
  It exposes a different set of common subexpressions
Consider x + y + z: an ideal n-ary operator, versus left association, ( x + y ) + z, versus right association, x + ( y + z )
  What if y + z occurs elsewhere? Or x + y? Or x + z?
  What if x = 2 & z = 17? Neither left nor right exposes 19 (the value of x + z).
The best choice is a function of the surrounding context.
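
A two-line Python check of the floating-point point: with IEEE doubles, reassociating x + y + z changes the result.

    x, y, z = 1e16, -1e16, 1.0
    print((x + y) + z)    # 1.0  (left association)
    print(x + (y + z))    # 0.0  (right association: y + z rounds back to -1e16)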

Context-Free Languages
[Figure: the inclusion hierarchy for context-free languages. Context-free languages contain the deterministic languages (LR(k), which equal LR(1) at the language level); inside the deterministic languages sit the LL(k) languages (which contain the LL(1) languages), the simple precedence languages, and the operator precedence languages.]

Context-Free Grammars
[Figure: the inclusion hierarchy for context-free grammars, with regions for context-free grammars, Floyd-Evans parsable grammars, unambiguous CFGs, operator precedence, LR(k), LL(k), LR(1), LALR(1), SLR(1), LR(0), and LL(1).]
Operator precedence includes some ambiguous grammars.
LL(1) is a subset of SLR(1).

Beyond Syntax
There is a level of correctness that is deeper than grammar.

fie(a,b,c,d)
    int a, b, c, d;
{ }

fee() {
    int f[3], g[0], h, i, j, k;
    char *p;
    fie(h, i, "ab", j, k);
    k = f * i + j;
    h = g[17];
    printf("<%s,%s>.\n", p, q);
    p = 10;
}

What is wrong with this program? (Let me count the ways...)

Semantics
There is a level of correctness that is deeper than grammar.
(The same program as on the previous slide.)
What is wrong with this program? (Let me count the ways...)
  declared g[0], used g[17]
  wrong number of args to fie()
  "ab" is not an int
  wrong dimension on use of f
  undeclared variable q
  10 is not a character string
All of these are deeper than syntax.

Beyond Syntax
To generate code, the compiler needs to answer many questions:
  Is x a scalar, an array, or a function? Is x declared?
  Are there names that are not declared? Declared but not used?
  Which declaration of x does each use reference?
  Is the expression x * y + z type-consistent?
  In a[i,j,k], does a have three dimensions?
  Where can z be stored? (register, local, global, heap, static)
  In f ← 15, how should 15 be represented?
  How many arguments does fie() take? What about printf()?
  Does *p reference the result of a malloc()?
  Do p & q refer to the same memory location?
  Is x defined before it is used?

Beyond Syntax
These questions are part of context-sensitive analysis:
  Answers depend on values, not parts of speech
  Questions & answers involve non-local information
  Answers may involve computation
How can we answer these questions?
  Use formal methods: context-sensitive grammars? attribute grammars? (attributed grammars?)
  Use ad-hoc techniques: symbol tables, ad-hoc code (action routines)
In scanning & parsing, formalism won; it is a different story here.

Beyond Syntax: Telling the Story
The attribute grammar formalism is important
  It succinctly makes many points clear
  It sets the stage for actual, ad-hoc practice
The problems with attribute grammars motivate practice
  Non-local computation
  Need for centralized information
Some folks in the community still argue for attribute grammars.
We will cover attribute grammars, then move on to ad-hoc ideas.

Attribute Grammars
What is an attribute grammar?
  A context-free grammar augmented with a set of rules
  Each symbol in the derivation has a set of values, or attributes
  The rules specify how to compute a value for each attribute
Example grammar:
  Number → Sign List
  Sign → + | -
  List → List Bit | Bit
  Bit → 0 | 1
This grammar describes signed binary numbers. We would like to augment it with rules that compute the decimal value of each valid input string.

Examples
For -1:
  Number ⇒ Sign List ⇒ Sign Bit ⇒ Sign 1 ⇒ - 1
For -101:
  Number ⇒ Sign List ⇒ Sign List Bit ⇒ Sign List 1 ⇒ Sign List Bit 1 ⇒ Sign List 0 1 ⇒ Sign Bit 0 1 ⇒ Sign 1 0 1 ⇒ - 1 0 1
We will use these two examples throughout the lecture.

Attribute Grammars
Add rules to compute the decimal value of a signed binary number.

Productions               Attribution Rules
Number → Sign List        List.pos ← 0
                          if Sign.neg
                            then Number.val ← - List.val
                            else Number.val ← List.val
Sign → +                  Sign.neg ← false
Sign → -                  Sign.neg ← true
List0 → List1 Bit         List1.pos ← List0.pos + 1
                          Bit.pos ← List0.pos
                          List0.val ← List1.val + Bit.val
List → Bit                Bit.pos ← List.pos
                          List.val ← Bit.val
Bit → 0                   Bit.val ← 0
Bit → 1                   Bit.val ← 2^Bit.pos

Attributes by symbol: Number has val; Sign has neg; List has pos, val; Bit has pos, val.
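
As a sanity check, here is a small Python sketch that evaluates these rules over a hand-built parse tree for -101. The tuple encoding of the tree and the function names are inventions of this sketch; the point is only that pos flows down while val flows up.

    def number_val(sign, lst):
        v = list_val(lst, pos=0)                 # List.pos <- 0
        return -v if sign == "-" else v          # Sign.neg decides the final sign

    def list_val(node, pos):
        if node[0] == "list":                    # List0 -> List1 Bit
            _, sub, bit = node
            return list_val(sub, pos + 1) + bit_val(bit, pos)
        _, bit = node                            # List -> Bit
        return bit_val(bit, pos)

    def bit_val(bit, pos):
        return 0 if bit == "0" else 2 ** pos     # Bit.val is 0 or 2^Bit.pos

    tree = ("list", ("list", ("bit", "1"), "0"), "1")   # the List subtree for 101
    print(number_val("-", tree))                        # prints -5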

Back to the Examples
The rules plus the parse tree imply an attribute dependence graph.
For -1:
  Number.val ← - List.val = -1
  Sign.neg ← true
  List.pos ← 0,  List.val ← Bit.val = 1
  Bit.pos ← 0,  Bit.val ← 2^Bit.pos = 1
One possible evaluation order:
  1. List.pos  2. Sign.neg  3. Bit.pos  4. Bit.val  5. List.val  6. Number.val
Other orders are possible.
Knuth suggested a data-flow model for evaluation:
  Independent attributes first
  Others in order, as their input values become available
The evaluation order must be consistent with the attribute dependence graph.

Back to the Examples
The complete attribute dependence graph for -101, drawn over the parse tree:
  Number (val: -5)
    Sign - (neg: true)
    List (pos: 0, val: 5)
      List (pos: 1, val: 4)
        List (pos: 2, val: 4)
          Bit 1 (pos: 2, val: 4)
        Bit 0 (pos: 1, val: 0)
      Bit 1 (pos: 0, val: 1)
It shows the flow of all attribute values in the example:
  Some flow downward: inherited attributes
  Some flow upward: synthesized attributes
A rule may use attributes in the parent, children, or siblings of a node.

The Rules of the Game
  Attributes are associated with nodes in the parse tree
  Rules are value assignments associated with productions
  Each attribute is defined once, using local information
  Identical terms in a production are labeled for uniqueness
  Rules & parse tree define an attribute dependence graph
    The graph must be non-circular
This produces a high-level, functional specification.
Synthesized attribute: depends on values from children
Inherited attribute: depends on values from siblings & parent

Using Attribute Grammars
Attribute grammars can specify context-sensitive actions
  Take values from the syntax
  Perform computations with values
  Insert tests, logic, ...
Synthesized attributes
  Use values from children & from constants
  S-attributed grammars
  Evaluate in a single bottom-up pass
  Good match to LR parsing
Inherited attributes
  Use values from parent, constants, & siblings
  Directly express context; can rewrite to avoid them
  Thought to be more natural
  Not easily done at parse time
We want to use both kinds of attributes.

Evaluation Methods
Dynamic, dependence-based methods
  Build the parse tree
  Build the dependence graph
  Topologically sort the dependence graph
  Define attributes in topological order
Rule-based methods (treewalk)
  Analyze the rules at compiler-generation time
  Determine a fixed (static) ordering
  Evaluate nodes in that order
Oblivious methods (passes, dataflow)
  Ignore rules & parse tree
  Pick a convenient order (at design time) & use it
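
A minimal sketch of the first (dynamic, dependence-based) method in Python, assuming the dependence graph and the per-attribute rules have already been extracted from a parse-tree instance; graphlib is in the standard library from Python 3.9 on.

    from graphlib import TopologicalSorter

    def evaluate(depends_on, rules):
        # depends_on: attribute instance -> set of instances it reads
        # rules: attribute instance -> function from the environment to its value
        env = {}
        for attr in TopologicalSorter(depends_on).static_order():
            env[attr] = rules[attr](env)         # every input is already in env
        return env                               # raises CycleError on a circular instance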

Back to the Example
The parse tree for -101:
  Number
    Sign -
    List
      List
        List
          Bit 1
        Bit 0
      Bit 1

Back to the Example
[The same parse tree for -101, now annotated with empty attribute slots: val for Number, neg for Sign, and pos/val for each List and Bit node. Only List.pos ← 0 at the topmost List has been filled in.]

Back to the Example
Inherited attributes: the pos values flow down the tree.
  Number (val: -5)
    Sign - (neg: true)
    List (pos: 0, val: 5)
      List (pos: 1, val: 4)
        List (pos: 2, val: 4)
          Bit 1 (pos: 2, val: 4)
        Bit 0 (pos: 1, val: 0)
      Bit 1 (pos: 0, val: 1)

Back to the Example
Synthesized attributes: the val and neg values flow up the same tree.

Back to the Example
If we show the computation and then peel away the parse tree...

Back to the Example
All that is left is the attribute dependence graph for -101: the attribute instances (neg: true; pos: 0, val: 5; pos: 1, val: 4; pos: 2, val: 4; the Bit values; and val: -5 at the root) and the edges between them.
This succinctly represents the flow of values in the problem instance.
The dynamic methods sort this graph to find independent values, then work along the graph edges.
The rule-based methods try to discover good orders by analyzing the rules.
The oblivious methods ignore the structure of this graph.
The dependence graph must be acyclic.

Circularity
We can only evaluate acyclic instances
  We can prove that some grammars can only generate instances with acyclic dependence graphs
  The largest such class is the strongly non-circular grammars (SNC)
  SNC grammars can be tested in polynomial time
  Failing the SNC test is not conclusive
Many evaluation methods discover circularity dynamically; that is a bad property for a compiler to have.
SNC grammars were first defined by Kennedy & Warren.

A Circular Attribute Grammar
Productions               Attribution Rules
Number → List             List.a ← 0
List0 → List1 Bit         List1.a ← List0.a + 1
                          List0.b ← List1.b
                          List1.c ← List1.b + Bit.val
List → Bit                List0.b ← List0.a + List0.c + Bit.val
Bit → 0                   Bit.val ← 0
Bit → 1                   Bit.val ← 2^Bit.pos

An Extended Example
Grammar for a basic block (§4.3.3):
  Block0 → Block1 Assign
         | Assign
  Assign → Ident = Expr ;
  Expr0 → Expr1 + Term
        | Expr1 - Term
        | Term
  Term0 → Term1 * Factor
        | Term1 / Factor
        | Factor
  Factor → ( Expr )
         | Number
         | Identifier
Let's estimate cycle counts
  Each operation has a COST
  Add them, bottom up
  Assume a load per value
  Assume no reuse
A simple problem for an AG. Hey, this looks useful!

An Extended Example: Adding Attribution Rules
Block0 → Block1 Assign    Block0.cost ← Block1.cost + Assign.cost
Block0 → Assign           Block0.cost ← Assign.cost
Assign → Ident = Expr ;   Assign.cost ← COST(store) + Expr.cost
Expr0 → Expr1 + Term      Expr0.cost ← Expr1.cost + COST(add) + Term.cost
Expr0 → Expr1 - Term      Expr0.cost ← Expr1.cost + COST(sub) + Term.cost
Expr0 → Term              Expr0.cost ← Term.cost
Term0 → Term1 * Factor    Term0.cost ← Term1.cost + COST(mult) + Factor.cost
Term0 → Term1 / Factor    Term0.cost ← Term1.cost + COST(div) + Factor.cost
Term0 → Factor            Term0.cost ← Factor.cost
Factor → ( Expr )         Factor.cost ← Expr.cost
Factor → Number           Factor.cost ← COST(loadI)
Factor → Identifier       Factor.cost ← COST(load)
All these attributes are synthesized!

An Extended Example
Properties of the example grammar
  All attributes are synthesized: an S-attributed grammar
  Rules can be evaluated bottom-up in a single pass
    Good fit to a bottom-up, shift/reduce parser
  Easily understood solution
  Seems to fit the problem well
What about an improvement?
  Values are loaded only once per block (not at each use)
  Need to track which values have already been loaded

A Better Execution Model
Adding load tracking
  Need sets Before and After for each production
  Must be initialized, updated, and passed around the tree

Factor → ( Expr )      Factor.cost ← Expr.cost ;
                       Expr.Before ← Factor.Before ;
                       Factor.After ← Expr.After
Factor → Number        Factor.cost ← COST(loadI) ;
                       Factor.After ← Factor.Before
Factor → Identifier    if (Identifier.name ∉ Factor.Before)
                       then Factor.cost ← COST(load) ;
                            Factor.After ← Factor.Before ∪ { Identifier.name }
                       else Factor.cost ← 0 ;
                            Factor.After ← Factor.Before

This looks more complex!

A Better Execution Model
Load tracking adds complexity.
But most of it is in the copy rules: every production needs rules to copy Before & After.
A sample production:
  Expr0 → Expr1 + Term    Expr0.cost ← Expr1.cost + COST(add) + Term.cost ;
                          Expr1.Before ← Expr0.Before ;
                          Term.Before ← Expr1.After ;
                          Expr0.After ← Term.After
These copy rules multiply rapidly
  Each creates an instance of the set
  Lots of work, lots of space, lots of rules to write

An Even Better Model
What about accounting for finite register sets?
  Before & After must be of limited size
  Adds complexity to Factor → Identifier
  Requires more complex initialization
The jump from tracking loads to tracking registers is small
  Copy rules are already in place
  Some local code to perform the allocation

The Extended Example
Tracking loads
  Introduced Before and After sets to record loads
  Added 2 copy rules per production
  Serialized evaluation into execution order
  Made the whole attribute grammar large & cumbersome
Finite register set
  Complicated one production (Factor → Identifier)
  Needed a little fancier initialization
  Changes were quite limited
Why is one change hard and the other easy?

The Moral of the Story
Non-local computation needed lots of supporting rules.
Complex local computation was relatively easy.
The problems
  Copy rules increase cognitive overhead
  Copy rules increase space requirements
    Need copies of attributes
    Can use pointers, for even more cognitive overhead
  Result is an attributed tree
    Must build the parse tree
    Either search the tree for answers or copy them to the root

Addressing the Problem
Ad-hoc techniques
  Introduce a central repository for facts
    A table of names
    A field in the table for the loaded/not-loaded state
  Avoids all the copy rules and the allocation & storage headaches
  All inter-assignment attribute flow is through the table
    Clean, efficient implementation
    Good techniques for implementing the table (hashing, §B.4)
  When it's done, the information is in the table!
  Cures most of the problems
Unfortunately, this design violates the functional paradigm. Do we care?
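
A toy Python version of that central repository: a name table whose records carry a loaded flag, so the "has this value been loaded yet?" question becomes a table lookup rather than a chain of copy rules. The class and method names are illustrative, not from the text.

    class SymbolTable:
        def __init__(self):
            self._entries = {}                        # name -> record (a plain dict here)

        def insert(self, name, **info):
            self._entries[name] = {"loaded": False, **info}

        def lookup(self, name):
            return self._entries.get(name)            # None if the name was never entered

        def cost_of_use(self, name, load_cost):
            rec = self._entries.setdefault(name, {"loaded": False})
            if rec["loaded"]:
                return 0                               # already loaded earlier in the block
            rec["loaded"] = True                       # first use in the block: pay for one load
            return load_cost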

The Realist's Alternative
Ad-hoc syntax-directed translation
  Associate a snippet of code with each production
  At each reduction, the corresponding snippet runs
  Allowing arbitrary code provides complete flexibility
    Includes the ability to do tasteless & bad things
To make this work
  Need names for the attributes of each symbol on the lhs & rhs
    Typically, one attribute passed through the parser + arbitrary code (structures, globals, statics, ...)
    Yacc introduced $$, $1, $2, ..., $n, left to right
    ANTLR allows defining variables
  Need an evaluation scheme
    Should fit into the parsing algorithm

Reworking the Example
Block0 → Block1 Assign
       | Assign
Assign → Ident = Expr ;    { cost ← cost + COST(store); }
Expr0 → Expr1 + Term       { cost ← cost + COST(add); }
      | Expr1 - Term       { cost ← cost + COST(sub); }
      | Term
Term0 → Term1 * Factor     { cost ← cost + COST(mult); }
      | Term1 / Factor     { cost ← cost + COST(div); }
      | Factor
Factor → ( Expr )
       | Number            { cost ← cost + COST(loadI); }
       | Identifier        { i ← hash(Identifier);
                             if (Table[i].loaded = false) then {
                               cost ← cost + COST(load);
                               Table[i].loaded ← true;
                             } }
This looks cleaner & simpler than the AG solution!
One missing detail: initializing cost.

Reworking the Example
Start → Init Block
Init → ε                   { cost ← 0; }
Block0 → Block1 Assign
       | Assign
Assign → Ident = Expr ;    { cost ← cost + COST(store); }
... and so on, as in the previous version of the example.
Before the parser can reach Block, it must reduce Init.
The reduction by Init sets cost to zero.
This is an example of splitting a production to create a reduction in the middle, for the sole purpose of hanging an action routine there!

Reworking the Example
Block0 → Block1 Assign     { $$ ← $1 + $2; }
       | Assign            { $$ ← $1; }
Assign → Ident = Expr ;    { $$ ← COST(store) + $3; }
Expr0 → Expr1 + Term       { $$ ← $1 + COST(add) + $3; }
      | Expr1 - Term       { $$ ← $1 + COST(sub) + $3; }
      | Term               { $$ ← $1; }
Term0 → Term1 * Factor     { $$ ← $1 + COST(mult) + $3; }
      | Term1 / Factor     { $$ ← $1 + COST(div) + $3; }
      | Factor             { $$ ← $1; }
Factor → ( Expr )          { $$ ← $2; }
       | Number            { $$ ← COST(loadI); }
       | Identifier        { i ← hash(Identifier);
                             if (Table[i].loaded = false) then {
                               $$ ← COST(load);
                               Table[i].loaded ← true;
                             } else $$ ← 0; }
This version passes the values through attributes. It avoids the need for initializing cost.

Building an Abstract Syntax Tree
Assume constructors for each node, assume the stack holds pointers to nodes, and assume yacc syntax.
Goal → Expr                { $$ = $1; }
Expr → Expr + Term         { $$ = MakeAddNode($1, $3); }
     | Expr - Term         { $$ = MakeSubNode($1, $3); }
     | Term                { $$ = $1; }
Term → Term * Factor       { $$ = MakeMulNode($1, $3); }
     | Term / Factor       { $$ = MakeDivNode($1, $3); }
     | Factor              { $$ = $1; }
Factor → ( Expr )          { $$ = $2; }
       | number            { $$ = MakeNumNode(token); }
       | id                { $$ = MakeIdNode(token); }
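
The slide assumes the constructors exist but does not define them; one plausible shape, sketched in Python, is a function per node kind that returns a small tree node (hypothetical definitions, matching the names used above).

    class Node:
        def __init__(self, kind, children=(), token=None):
            self.kind = kind                  # "add", "sub", "mul", "div", "num", or "id"
            self.children = tuple(children)   # subtree pointers taken from the parser stack
            self.token = token                # the lexeme, for leaf nodes

    def MakeAddNode(l, r): return Node("add", (l, r))
    def MakeSubNode(l, r): return Node("sub", (l, r))
    def MakeMulNode(l, r): return Node("mul", (l, r))
    def MakeDivNode(l, r): return Node("div", (l, r))
    def MakeNumNode(tok):  return Node("num", token=tok)
    def MakeIdNode(tok):   return Node("id",  token=tok)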

Reality
Most parsers are based on this ad-hoc style of context-sensitive analysis.
Advantages
  Addresses the shortcomings of the AG paradigm
  Efficient, flexible
Disadvantages
  Must write the code with little assistance
  Programmer deals directly with the details
  Must annotate the grammar rules with action code

Typical Uses
Building a symbol table
  Enter declaration information as it is processed
  At the end of declaration syntax, do some post-processing
  Use the table to check errors as parsing progresses
Simple error checking / type checking
  Define before use: look up on each reference
  Dimension, type, ...: check as encountered
  Type conformability of expressions: bottom-up walk
  Procedure interfaces are harder
    Build a representation for the parameter list & types
    Create a list of sites to check
    Check offline, or handle the cases for arbitrary orderings

Is This Really Ad-hoc?
Relationship between practice and attribute grammars
Similarities
  Both rules & actions are associated with productions
  Application order is determined by tools, not by the author
  (Somewhat) abstract names for symbols
Differences
  Actions are applied as a unit; not true for AG rules
  Anything goes in ad-hoc actions; AG rules are functional
  AG rules are higher level than ad-hoc actions

Making Ad-hoc SDT Work
What about a rule that must run in mid-production?
  Can transform the grammar
    Split it into two parts at the point where the rule must go
    Apply the rule on reduction to the appropriate part
  Can also handle reductions on shift actions
    Add a production to create a reduction
    Was:      fee → fum
    Make it:  fee → fie,  fie → fum
    and tie the action to the new reduction
  ANTLR supports the above automatically
Together, these let us apply a rule at any point in the parse.

Limitations of Ad-hoc SDT
Forced to evaluate in a given order: postorder
  Left to right only
  Bottom up only
Implications
  Declarations before uses
  Context information cannot be passed down
    How do you know what rule you are called from within?
    Example: cannot pass the bit position from the right down
  Could you use globals?
    Requires initialization & some re-thinking of the solution
  Can we rewrite it in a form that is better for the ad-hoc solution?

Alternative Strategy
Build an abstract syntax tree.
Use tree-walk routines.
Use the visitor design pattern to add functionality:
  TreeNodeVisitor
    VisitAssignment(AssignmentNode)
    VisitVariableRef(VariableRefNode)
  TypeCheckVisitor
    VisitAssignment(AssignmentNode)
    VisitVariableRef(VariableRefNode)
  AnalysisVisitor
    VisitAssignment(AssignmentNode)
    VisitVariableRef(VariableRefNode)
(TypeCheckVisitor and AnalysisVisitor are concrete visitors that implement the abstract TreeNodeVisitor interface.)

Visitor Treewalk
Parallel structure of the tree
  Separates tree-walk code from node-handling code
  Facilitates changes in processing without changes to the tree structure
  TreeNode
    Accept(NodeVisitor)
  AssignmentNode
    Accept(NodeVisitor v) { v.VisitAssignment(this) }
  VariableRefNode
    Accept(NodeVisitor v) { v.VisitVariableRef(this) }
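
A compact Python rendering of this structure (a sketch: the node and visitor names follow the diagram, the method bodies are hypothetical). Each node's Accept dispatches to the matching visit method, so adding a new pass means adding a visitor class, not touching every node class.

    class AssignmentNode:
        def __init__(self, target, value):
            self.target, self.value = target, value
        def accept(self, visitor):
            return visitor.visit_assignment(self)      # double dispatch on node type

    class VariableRefNode:
        def __init__(self, name):
            self.name = name
        def accept(self, visitor):
            return visitor.visit_variable_ref(self)

    class TypeCheckVisitor:
        def visit_assignment(self, node):
            node.value.accept(self)                    # type the right-hand side first
            # ...then compare it against the declared type of node.target
        def visit_variable_ref(self, node):
            pass                                       # e.g. look the name up in the symbol table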

Wrap-up of Parsing
Summary
  More examples of building the LR(1) table
  Attribute grammars
    Pros: formal, powerful, can deal with propagation strategies
    Cons: too many copy rules, no global tables, works on the parse tree
  Ad-hoc SDT
    Annotate productions with ad-hoc action code; postorder code execution
    Pros: simple and functional, can be specified in the grammar, does not require the parse tree
    Cons: rigid evaluation order, no context inheritance
  Generalized tree walk
    Pros: full power and generality, operates on the abstract syntax tree (using the Visitor pattern)
    Cons: requires specific code for each tree node type, more complicated
    Powerful tools like ANTLR can help with this

Next Class
Intermediate Representations