LR Parsing LALR Parser Generators


Outline
- Review of bottom-up parsing
- Computing the parsing DFA
- Using parser generators

Bottom-up Parsing (Review)
A bottom-up parser rewrites the input string to the start symbol. The state of the parser is described as α • γ, where α is a stack of terminals and non-terminals and γ is the string of terminals not yet examined. Initially: • x1 x2 ... xn.

The Shift and Reduce Actions (Review)
Recall the CFG: E → int | E + (E). A bottom-up parser uses two kinds of actions:
- Shift pushes a terminal from the input onto the stack: E + ( • int )  becomes  E + ( int • ).
- Reduce pops zero or more symbols off the stack (a production's RHS) and pushes a non-terminal (the production's LHS): E + ( E + ( E ) • )  becomes  E + ( E • ).

Key Issue: When to Shift or Reduce?
Idea: use a deterministic finite automaton (DFA) to decide when to shift or reduce. The input to the DFA is the stack, so its alphabet consists of terminals and non-terminals. We run the DFA on the stack and examine the resulting state X and the token tok after •:
- If X has a transition labeled tok, then shift.
- If X is labeled with "A → β on tok", then reduce.

LR(1) Parsing: An Example
[DFA diagram: states 0-11 for the grammar E → int | E + (E), with reduce labels such as "E → int on $, +", "E → int on ), +", "E → E + (E) on $, +", "E → E + (E) on ), +", and "accept on $".] The parse of int + (int) + (int)$ proceeds:

• int + (int) + (int)$     shift
int • + (int) + (int)$     E → int
E • + (int) + (int)$       shift (x3)
E + (int • ) + (int)$      E → int
E + (E • ) + (int)$        shift
E + (E) • + (int)$         E → E + (E)
E • + (int)$               shift (x3)
E + (int • )$              E → int
E + (E • )$                shift
E + (E) • $                E → E + (E)
E • $                      accept

Representing the DFA
Parsers represent the DFA as a 2D table (recall table-driven lexical analysis). Rows correspond to DFA states; columns correspond to terminals and non-terminals. Typically the columns are split into two parts: those for terminals (the action table) and those for non-terminals (the goto table).

Representing the DFA: Example
The table for a fragment of our DFA, where "sk" means shift and go to state k, "r X → α" means reduce with X → α, and "gk" means goto state k:

      int   (     )          +              $              E
3           s4
4     s5                                                   g6
5                 r E → int  r E → int
6                 s7         s8
7                            r E → E + (E)  r E → E + (E)

The LR Parsing Algorithm
After a shift or reduce action we rerun the DFA on the entire stack. This is wasteful, since most of the work is repeated. Instead, remember for each stack element the state the DFA reaches on it: an LR parser maintains a stack of pairs ⟨sym1, state1⟩ ... ⟨symn, staten⟩, where statek is the final state of the DFA on sym1 ... symk.

  let I = w$ be the initial input
  let j = 0
  let DFA state 0 be the start state
  let stack = ⟨dummy, 0⟩
  repeat
    case action[top_state(stack), I[j]] of
      shift k:      push ⟨I[j++], k⟩
      reduce X → α: pop |α| pairs,
                    push ⟨X, goto[top_state(stack), X]⟩
      accept:       halt normally
      error:        halt and report error

Key Issue: How is the DFA Constructed?
The stack describes the context of the parse: what non-terminal we are looking for, what production RHS we are looking for, and what we have seen so far of that RHS. Each DFA state describes several such contexts. E.g., when we are looking for non-terminal E, we might be looking either for an int or an E + (E) RHS.

LR(0) Items
An LR(0) item is a production with a • somewhere on the RHS. The items for T → (E) are:
T → • (E)
T → ( • E)
T → (E • )
T → (E) •
The only item for X → ε is X → •.
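The table-driven loop above can be sketched in Python for the lecture's grammar E → int | E + (E). The action/goto tables below are hand-written to mirror the (lookahead-merged) DFA used in these slides; the state numbers are illustrative, not authoritative.

```python
# Grammar:  production 1: E -> int     production 2: E -> E + ( E )
PRODUCTIONS = {1: ('E', 1), 2: ('E', 5)}   # prod number -> (LHS, |RHS|)

# Hand-written action and goto tables (sketch; mirrors the slides' DFA)
ACTION = {
    (0, 'int'): ('shift', 1),
    (1, '$'): ('reduce', 1), (1, '+'): ('reduce', 1), (1, ')'): ('reduce', 1),
    (2, '$'): ('accept', None), (2, '+'): ('shift', 3),
    (3, '('): ('shift', 4),
    (4, 'int'): ('shift', 1),
    (6, ')'): ('shift', 7), (6, '+'): ('shift', 3),
    (7, '$'): ('reduce', 2), (7, '+'): ('reduce', 2), (7, ')'): ('reduce', 2),
}
GOTO = {(0, 'E'): 2, (4, 'E'): 6}

def parse(tokens):
    toks = list(tokens) + ['$']
    stack = [('dummy', 0)]                 # pairs <sym, state>, as on the slide
    j = 0
    while True:
        act = ACTION.get((stack[-1][1], toks[j]))
        if act is None:
            return False                   # error: halt and report
        kind, arg = act
        if kind == 'shift':
            stack.append((toks[j], arg))
            j += 1
        elif kind == 'reduce':
            lhs, n = PRODUCTIONS[arg]
            del stack[len(stack) - n:]     # pop |RHS| pairs
            stack.append((lhs, GOTO[(stack[-1][1], lhs)]))
        else:
            return True                    # accept
```

For example, parse(['int', '+', '(', 'int', ')']) accepts, while parse(['int', '+', 'int']) halts with an error in state 3, which expects '('.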

LR(0) Items: Intuition
An item [X → α • β] says that the parser is looking for an X, it has an α on top of the stack, and it expects to find a string derived from β next in the input. Notes:
- [X → α • aβ] means that a should follow; then we can shift it and still have a viable prefix.
- [X → α •] means that we could reduce with X → α. But this is not always a good idea!

LR(1) Items
An LR(1) item is a pair [X → α • β, a], where X → αβ is a production and a is a terminal (the lookahead terminal). "LR(1)" means 1 lookahead terminal. [X → α • β, a] describes a context of the parser: we are trying to find an X followed by an a, and we have (at least) α already on top of the stack. Thus we need to see next a prefix derived from βa.

Note
The symbol • was used before to separate the stack from the rest of the input: α • γ, where α is the stack and γ is the remaining string of terminals. In items, • is used to mark a prefix of a production RHS: [X → α • β, a]; here β might contain terminals as well. In both cases the stack is on the left of •.

Convention
We add to our grammar a fresh new start symbol S and a production S → E, where E is the old start symbol. The initial parsing context contains [S → • E, $]: we are trying to find an S as a string derived from E$, and the stack is empty.

LR(1) Items (Cont.)
In a context containing [E → E + • (E), +]: if ( follows, we can perform a shift to a context containing [E → E + ( • E), +]. In a context containing [E → E + (E) •, +]: we can perform a reduction with E → E + (E), but only if a + follows.

LR(1) Items (Cont.)
Consider the item [E → E + ( • E), +]. We expect a string derived from E ) +. There are two productions for E: E → int and E → E + (E). We describe this by extending the context with two more items:
[E → • int, )]
[E → • E + (E), )]

The Closure Operation
The operation of extending the context with items is called the closure operation:

  Closure(Items) =
    repeat
      for each [X → α • Yβ, a] in Items
        for each production Y → γ
          for each b in First(βa)
            add [Y → • γ, b] to Items
    until Items is unchanged

Constructing the Parsing DFA (1)
Construct the start context: Closure({[S → • E, $]}) =
S → • E, $
E → • E + (E), $
E → • int, $
E → • E + (E), +
E → • int, +
We abbreviate this as:
S → • E, $
E → • E + (E), $/+
E → • int, $/+
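The closure operation above can be sketched in Python, specialized to the lecture's grammar S → E, E → int | E + (E). The item representation and the simplified First computation (valid because no symbol in this grammar is nullable) are my own choices for the sketch.

```python
# An LR(1) item is (lhs, rhs, dot, lookahead); rhs is a tuple of symbols.
GRAMMAR = {'S': [('E',)], 'E': [('int',), ('E', '+', '(', 'E', ')')]}

def first_sym(sym, seen=()):
    """First set of a single symbol (works because nothing is nullable)."""
    if sym not in GRAMMAR:
        return {sym}                        # terminal
    if sym in seen:
        return set()                        # already being expanded
    out = set()
    for rhs in GRAMMAR[sym]:
        out |= first_sym(rhs[0], seen + (sym,))
    return out

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot, la) in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:
                beta = rhs[dot + 1:]
                # First(beta a): with no nullable symbols this is
                # First(beta[0]) when beta is nonempty, else {a}
                lookaheads = first_sym(beta[0]) if beta else {la}
                for gamma in GRAMMAR[rhs[dot]]:
                    for b in lookaheads:
                        new = (rhs[dot], tuple(gamma), 0, b)
                        if new not in items:
                            items.add(new)
                            changed = True
    return items

start = closure({('S', ('E',), 0, '$')})
```

Running this reproduces the start state from the slide: the five items S → • E on $, plus E → • int and E → • E + (E), each on $ and on +.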

Constructing the Parsing DFA (2)
A DFA state is a closed set of LR(1) items. The start state contains [S → • E, $]. A state that contains [X → α •, b] is labelled with "reduce with X → α on b". And now the transitions...

The DFA Transitions
A state State that contains [X → α • yβ, b] has a transition labeled y to the state Transition(State, y); y can be a terminal or a non-terminal.

  Transition(State, y) =
    Items = ∅
    for each [X → α • yβ, b] in State
      add [X → αy • β, b] to Items
    return Closure(Items)

Constructing the Parsing DFA: Example
[DFA diagram. Start state 0 = {S → • E, $; E → • E + (E), $/+; E → • int, $/+}. On int, state 1 = {E → int •, $/+}, labelled "E → int on $, +". On E, state 2 = {S → E •, $ (accept on $); E → E • + (E), $/+}. On +, state 3 = {E → E + • (E), $/+}. On (, state 4 = {E → E + ( • E), $/+; E → • E + (E), )/+; E → • int, )/+}. On int, state 5 = {E → int •, )/+}, labelled "E → int on ), +". On E, state 6 = {E → E + (E • ), $/+; E → E • + (E), )/+}. And so on.]

LR Parsing Tables: Notes
Parsing tables (i.e., the DFA) can be constructed automatically for a CFG. But we still need to understand the construction to work with parser generators: e.g., they report errors in terms of sets of items. What kinds of errors can we expect?
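The Transition function above can be sketched in Python. For brevity this sketch uses LR(0) items (lookaheads dropped), which keeps Closure trivial; the item encoding is my own choice.

```python
# An LR(0) item is (lhs, rhs, dot) for the grammar S -> E, E -> int | E + ( E ).
GRAMMAR = {'S': [('E',)], 'E': [('int',), ('E', '+', '(', 'E', ')')]}

def closure0(items):
    """LR(0) closure: pull in fresh items for any non-terminal after the dot."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:
                for gamma in GRAMMAR[rhs[dot]]:
                    new = (rhs[dot], tuple(gamma), 0)
                    if new not in items:
                        items.add(new)
                        changed = True
    return frozenset(items)

def transition(state, y):
    """Advance the dot over y in every item that expects y, then close."""
    moved = {(lhs, rhs, dot + 1)
             for (lhs, rhs, dot) in state
             if dot < len(rhs) and rhs[dot] == y}
    return closure0(moved)

start = closure0({('S', ('E',), 0)})
state2 = transition(start, 'E')
```

As on the slides, the transition from the start state on E yields the state containing S → E • and E → E • + (E).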

Shift/Reduce Conflicts
If a DFA state contains both [X → α • aβ, b] and [Y → γ •, a], then on input a we could either shift into a state containing [X → αa • β, b] or reduce with Y → γ. This is called a shift-reduce conflict.

Shift/Reduce Conflicts
Typically due to ambiguities in the grammar. Classic example: the dangling else:
S → if E then S | if E then S else S | OTHER
This grammar will have a DFA state containing
[S → if E then S •, else]
[S → if E then S • else S, x]
If else follows, we can either shift or reduce. The default in yacc, ML-yacc, etc. is to shift, and the default behavior is what is needed in this case.

More Shift/Reduce Conflicts
Consider the ambiguous grammar E → E + E | E * E | int. We will have states containing
[E → E * • E, +]        [E → E * E •, +]
[E → • E + E, +]        [E → E • + E, +]
Again we have a shift/reduce conflict, on input +. We need to reduce (* binds more tightly than +). Recall the solution: declare the precedence of * and +.

More Shift/Reduce Conflicts
In yacc, declare precedence and associativity:
%left +
%left *
The precedence of a rule is that of its last terminal (see the yacc manual for ways to override this default). A shift/reduce conflict is resolved with a shift if:
- no precedence is declared for either the rule or the terminal, or
- the input terminal has higher precedence than the rule, or
- the precedences are the same and the rule is right-associative.
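The resolution rules just listed can be sketched as a small Python function. The precedence table here is illustrative (it is not a real yacc API): each terminal gets a (level, associativity) pair, and a rule inherits the precedence of its last terminal.

```python
# Yacc-style precedence table:  %left +   then   %left *
# (later declarations bind tighter, so '*' gets the higher level)
PREC = {'+': (1, 'left'), '*': (2, 'left')}

def resolve(rule_last_terminal, lookahead):
    """Decide a shift/reduce conflict the way the slides describe."""
    if rule_last_terminal not in PREC or lookahead not in PREC:
        return 'shift'                  # no precedence declared: default shift
    rule_level, _ = PREC[rule_last_terminal]
    tok_level, assoc = PREC[lookahead]
    if tok_level > rule_level:
        return 'shift'                  # input terminal binds tighter than rule
    if tok_level == rule_level and assoc == 'right':
        return 'shift'                  # equal precedence, right-associative
    return 'reduce'
```

With this table, the conflict between E → E * E • and lookahead + resolves to reduce (the rule's precedence is higher), and E → E + E • with lookahead + also reduces (equal precedence, left-associative), matching the slides' examples.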

Using Precedence to Solve S/R Conflicts
Back to our example:
[E → E * E •, +]
[E → E • + E, +]
We choose reduce, because the precedence of the rule E → E * E is higher than that of the terminal +.

Using Precedence to Solve S/R Conflicts
Same grammar as before: E → E + E | E * E | int. We will also have the states
[E → E + E •, +]
[E → E • + E, +]
Now we also have a shift/reduce conflict on input +. We choose reduce, because E → E + E and + have the same precedence and + is left-associative.

Using Precedence to Solve S/R Conflicts
Back to our dangling-else example:
[S → if E then S •, else]
[S → if E then S • else S, x]
We can eliminate the conflict by declaring else to have higher precedence than then. But this starts to look like hacking the tables. It is best to avoid overuse of precedence declarations, or we will end up with unexpected parse trees.

Precedence Declarations Revisited
The term "precedence declaration" is misleading! These declarations do not define precedence: they define conflict resolutions. I.e., they instruct shift-reduce parsers to resolve conflicts in certain ways. The two are not quite the same thing!

Reduce/Reduce Conflicts
If a DFA state contains both [X → α •, a] and [Y → β •, a], then on input a we don't know which production to reduce with. This is called a reduce/reduce conflict.

Reduce/Reduce Conflicts
Usually due to gross ambiguity in the grammar. Example: a sequence of identifiers:
S → ε | id | id S
There are two parse trees for the string id:
S → id
S → id S → id
How does this confuse the parser?

More on Reduce/Reduce Conflicts
Consider the states
{[S' → • S, $], [S → •, $], [S → • id, $], [S → • id S, $]}
and, after a transition on id,
{[S → id •, $], [S → id • S, $], [S → •, $], [S → • id, $], [S → • id S, $]}
There is a reduce/reduce conflict on input $: S' → S → id versus S' → S → id S → id. Better to rewrite the grammar as:
S → ε | id S

Using Parser Generators
Parser generators automatically construct the parsing DFA for a given CFG. They use precedence declarations and default conventions to resolve conflicts. The parser algorithm is the same for all grammars (and is provided as a library function). But most parser generators do not construct the DFA as described before, because the LR(1) parsing DFA has thousands of states even for a simple language.

LR(1) Parsing Tables Are Big
But many states are similar. E.g., state 1
{E → int •, $/+}     reduce E → int on $, +
and state 5
{E → int •, )/+}     reduce E → int on ), +
Idea: merge the DFA states whose items differ only in the lookahead tokens. We say that such states have the same core. We obtain state 1'
{E → int •, $/+/)}   reduce E → int on $, +, )

The Core of a Set of LR Items
Definition: the core of a set of LR items is the set of first components, without the lookahead terminals. Example: the core of {[X → α • β, b], [Y → γ • δ, d]} is {X → α • β, Y → γ • δ}.

LALR States
Consider for example the LR(1) states
{[X → α •, a], [Y → β •, c]}
{[X → α •, b], [Y → β •, d]}
They have the same core and can be merged, and the merged state contains
{[X → α •, a/b], [Y → β •, c/d]}
These are called LALR(1) states; LALR stands for LookAhead LR. There are typically about 10 times fewer LALR(1) states than LR(1) states.

A LALR(1) DFA
Repeat until all states have distinct cores:
- choose two distinct states with the same core;
- merge the states by creating a new one with the union of all their items;
- point edges from predecessors to the new state;
- the new state points to all the previous successors.
[Diagram: two states with the same core are merged; their incoming and outgoing edges are redirected to the merged state.]

Conversion LR(1) to LALR(1): Example
[Diagram: the LR(1) DFA from the earlier example (states 0-11) collapses to an LALR(1) DFA with merged states 1,5; 3,8; 4,9; 6,10; 7,11, with labels such as "E → int on $, +, )", "E → E + (E) on $, +, )", and "accept on $".]

The LALR Parser Can Have Conflicts
Consider for example the LR(1) states
{[X → α •, a], [Y → β •, b]}
{[X → α •, b], [Y → β •, a]}
The merged LALR(1) state
{[X → α •, a/b], [Y → β •, a/b]}
has a new reduce/reduce conflict. In practice such cases are rare.

LALR vs. LR Parsing: Things to Keep in Mind
LALR languages are not natural; they are an efficiency hack on LR languages. Still, any reasonable programming language has a LALR(1) grammar, and LALR(1) parsing has become a standard for programming languages and for parser generators.

A Hierarchy of Grammar Classes
[Diagram of grammar-class containment, from Andrew Appel, Modern Compiler Implementation in ML.]

Semantic Actions in LR Parsing
We can now illustrate how semantic actions are implemented for LR parsing: keep the attributes on the stack.
- On shifting a terminal a, push the attribute for a onto the stack.
- On reducing X → α, pop the attributes for α, compute the attribute for X, and push it onto the stack.

Performing Semantic Actions: Example
Recall the example:
E → T + E1    { E.val = T.val + E1.val }
  | T         { E.val = T.val }
T → int * T1  { T.val = int.val * T1.val }
  | int       { T.val = int.val }
Consider the parsing of the string 4 * 9 + 6.

Performing Semantic Actions: Example

• 4 * 9 + 6       shift
4 • * 9 + 6       shift
4 * • 9 + 6       shift
4 * 9 • + 6       reduce T → int
4 * T(9) • + 6    reduce T → int * T
T(36) • + 6       shift
T(36) + • 6       shift
T(36) + 6 •       reduce T → int
T(36) + T(6) •    reduce E → T
T(36) + E(6) •    reduce E → T + E
E(42) •           accept

Notes
The previous example shows how synthesized attributes are computed by LR parsers. It is also possible to compute inherited attributes in an LR parser.
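The attribute-on-the-stack discipline above can be sketched in Python. Rather than build the full parsing DFA, this sketch hand-codes the handle recognition for this one grammar (a stand-in for the real tables); what matters is that values ride on the stack alongside symbols and are combined at each reduce.

```python
# Grammar:  E -> T + E | T      T -> int * T | int
# Stack entries are (symbol, attribute-value) pairs.

def evaluate(tokens):
    toks = list(tokens) + ['$']
    stack = []
    i = 0
    while True:
        la = toks[i]
        syms = [s for s, _ in stack]
        if syms == ['E'] and la == '$':            # accept
            return stack[0][1]
        if syms[-3:] == ['int', '*', 'T']:         # reduce T -> int * T
            (_, t) = stack.pop(); stack.pop(); (_, n) = stack.pop()
            stack.append(('T', n * t))             # T.val = int.val * T1.val
        elif syms[-3:] == ['T', '+', 'E']:         # reduce E -> T + E
            (_, e) = stack.pop(); stack.pop(); (_, t) = stack.pop()
            stack.append(('E', t + e))             # E.val = T.val + E1.val
        elif syms[-1:] == ['int'] and la != '*':   # reduce T -> int
            (_, n) = stack.pop()
            stack.append(('T', n))                 # T.val = int.val
        elif syms[-1:] == ['T'] and la == '$':     # reduce E -> T
            (_, t) = stack.pop()
            stack.append(('E', t))                 # E.val = T.val
        else:                                      # shift
            tok = toks[i]
            i += 1
            if isinstance(tok, int):
                stack.append(('int', tok))         # push attribute for int
            else:
                stack.append((tok, None))          # '+' and '*' carry no value
```

Running evaluate([4, '*', 9, '+', 6]) performs exactly the reduction sequence of the trace above and yields 42.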

Notes on Parsing
Parsing:
- a solid foundation: context-free grammars
- a simple parser: LL(1)
- a more powerful parser: LR(1)
- an efficiency hack: LALR(1)
- LALR(1) parser generators
Next time we move on to semantic analysis.

Supplement to LR Parsing
Strange reduce/reduce conflicts due to LALR conversion (and how to handle them).

Strange Reduce/Reduce Conflicts
Consider the grammar:
S → P R ,
NL → N | N , NL
P → T | NL : T
R → T | N : T
N → id
T → id
Here P is a parameters specification, R a result specification, N a parameter or result name, T a type name, and NL a list of names.

Strange Reduce/Reduce Conflicts
In P, an id is an N when followed by , or :, and a T when followed by id. In R, an id is an N when followed by :, and a T when followed by ,. This is an LR(1) grammar, but it is not LALR(1). Why? For obscure reasons.

A Few LR(1) States
State 1, within P (each item is shown with its lookahead):
P → • T id        id
P → • NL : T      id
T → • id          id
NL → • N          :
NL → • N , NL     :
N → • id          :
N → • id          ,
On id this goes to state 3:
T → id •          id
N → id •          :
N → id •          ,

State 2, within R:
R → • T           ,
R → • N : T       ,
T → • id          ,
N → • id          :
On id this goes to state 4:
T → id •          ,
N → id •          :

States 3 and 4 have the same core, so the LALR merge combines them into
T → id •          id/,
N → id •          :/,
which has a reduce/reduce conflict on ,.

What Happened?
Two distinct states were confused because they have the same core. Fix: add dummy productions to distinguish the two confused states. E.g., add
R → id bogus
where bogus is a terminal not used by the lexer. This production will never be used during parsing, but it distinguishes R from P.

A Few LR(1) States After Fix
State 1, within P, is as before; on id it still goes to state 3:
T → id •          id
N → id •          :
N → id •          ,
State 2, within R, now also contains [R → • id bogus, ,]; on id it goes to state 4:
T → id •          ,
N → id •          :
R → id • bogus    ,
The cores are now different, so there is no LALR merging.