CSCI 4500/6500 Programming Languages: Lex & Yacc. Big Picture: Compilation Process.

Big Picture: Compilation Process

- Scanner (lexical analysis): reads the source program and produces lexical units, a stream of tokens.
- Parser (syntax analysis): consumes the token stream and builds a parse tree.
- Semantic analysis: produces an abstract syntax tree or some other intermediate form.
- Optimizer (optional): improves the intermediate form.
- Code generator: emits machine (or assembly) language for the target computer.
- A symbol table is shared across the phases.

Example: a = b + c * d

- Scanner output (lexical units, token stream): id1 = id2 + id3 * id4
- Parser output: a parse tree with = at the root, id1 on the left, and + over id2 and (* over id3 and id4) on the right.
- Code generator output (machine/assembly language):

    load  id3
    mul   id4
    add   id2
    store id1

Overview: building a compiler with lex and yacc

- file.l holds the pattern-matching rules for tokens.
  - It tells lex what the strings/symbols of the language look like so lex can convert them to tokens (entering them into the symbol table with attributes such as data type, e.g. integer) that yacc understands.
  - lex turns file.l into lex.yy.c, which contains the tokenizer function yylex().
- file.y holds the grammar rules for the language.
  - It tells yacc what the grammar rules are so it can analyze the tokens it gets from lex and build a syntax tree.
  - yacc turns file.y into y.tab.c (containing yyparse()) and the token-definition header y.tab.h.
- gcc compiles lex.yy.c and y.tab.c into the compiled output, e.g. MariasNewCompiler.exe, which can then translate a source program written in the new language, e.g. Hello.maria.
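As a preview of the structure the rest of these notes fill in, here is a minimal, hypothetical pair of input files (tiny.l and tiny.y are made-up names, and the header is assumed to be produced as y.tab.h by yacc -d; bison -d names it after the grammar file). They can be built with flex, yacc/bison, and gcc, linking with -lfl as in the examples below.

    /* tiny.l: a minimal token specification for lex/flex */
    %{
    #include "y.tab.h"          /* token codes generated by yacc -d / bison -d */
    %}
    %%
    [0-9]+          return NUMBER;
    [ \t\n]+        /* ignore whitespace */;
    %%

    /* tiny.y: a minimal grammar for yacc/bison */
    %{
    #include <stdio.h>
    void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
    int yylex(void);
    %}
    %token NUMBER
    %%
    input: NUMBER               { printf("saw a number\n"); }
         ;
    %%
    int main(void) { return yyparse(); }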

Lex and Yacc

- Lex and Yacc are tools for generating language processors; they each do a single job:
  - Lex: get the next token from the input stream.
  - Yacc: parse a sequence of tokens to see if it matches a grammar.
- A yacc-generated parser calls the lex-generated tokenizer function yylex() each time it wants a token.
- You can define actions for particular grammar rules: for example, an action can print "match" when the input matches the expected grammar (or do something more complex).

What is Lex?

- Lex is a lexical analyzer generator (tokenizer generator). Given a lex specification file as input (a .l file), it automatically generates a lexer or scanner.
- The generated scanner breaks the input stream into tokens.
  - For example, consider breaking a file containing the story Moby Dick into individual words, ignoring whitespace and punctuation.
- Lex generates a C source file, e.g. maria.c, that
  - contains a function yylex() which obtains the next valid token from the input stream, and
  - can then be compiled by the C compiler to machine or assembly code.

Lex Syntax

- A lex input file consists of up to three sections, separated by %% lines:
  - Definitions: lex definitions (substitutions, start states) plus C declarations wrapped in %{ ... %}; these are copied into the generated .c file and can be used in the middle section.
  - Rules: pattern/action pairs, where the pattern is a regular expression and the action is C code; they define how to scan and what action to take for each lexeme that translates to a token.
  - Supplementary C routines: any user code, for example a main() that calls yylex(); a default is provided if you omit it.

Example 1 (example1.l): recognize two commands

    %{
    #include <stdio.h>
    %}
    %%
    Stop    printf("Stop command received\n");
    Start   printf("Start command received\n");
    %%

    atlas:maria:255 flex -l -t example1.l > example1.c
    atlas:maria:257 gcc example1.c -o example1 -lfl
    atlas:maria:261 example1
    hello
    hello
    Start
    Start command received

Example 2 (example2.l): numbers and words

    %%
    [0123456789]+           printf("NUMBER\n");
    [a-zA-Z][a-zA-Z0-9]*    printf("WORD\n");
    %%

    atlas:maria:422 flex -l -t example2.l > example2.c
    atlas:maria:423 gcc example2.c -o example2 -lfl
    atlas:maria:424 example2
    hello
    WORD
    5lkfsj
    NUMBER
    WORD
    lkjklj3245
    WORD
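A side note on the runs above: input that matches no rule (the first hello, for instance) is simply copied to the output, which is lex/flex's default action. If that echo is not wanted, a catch-all rule can discard it; the last rule below is a hypothetical addition, not part of the course examples.

    %%
    Stop        printf("Stop command received\n");
    Start       printf("Start command received\n");
    .|\n        ;    /* match anything else and silently discard it instead of echoing */
    %%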

Example 3 (example3.l): tokenizing a BIND-style configuration file

The input file (input3.txt) is a named.conf-style fragment containing statements such as category lame-servers { null; };, category cname { null; };, type hint;, and file "/etc/bind/db.root";.

    %%
    [a-zA-Z][a-zA-Z0-9]*    printf("WORD ");
    [a-zA-Z0-9\/.-]+        printf("FILENAME ");
    \"                      printf("QUOTE ");
    \{                      printf("OBRACE ");
    \}                      printf("EBRACE ");
    ;                       printf("SEMICOLON ");
    \n                      printf("\n");
    [ \t]+                  /* ignore whitespace */;
    %%

    atlas:maria:470 example3 < input3.txt
    WORD OBRACE WORD FILENAME OBRACE WORD SEMICOLON EBRACE SEMICOLON WORD WORD OBRACE WORD SEMICOLON EBRACE SEMICOLON EBRACE SEMICOLON WORD QUOTE FILENAME QUOTE OBRACE WORD WORD SEMICOLON WORD QUOTE FILENAME QUOTE SEMICOLON EBRACE SEMICOLON

Regular Expressions in Lex

- Operators: " \ [ ] ^ - ? . * + | ( ) $ / { } % < >
  - To use an operator as an ordinary character, escape it with \ (the escape character): \$ is $, \\ is \.
- [ ] defines character classes:
  - [ab]          a or b
  - [a-z]         a, b, c, ..., or z
  - [^a-zA-Z]     any character that is NOT a letter
- . matches any character except newline (\n).
- A?  A*  A+     zero or one, zero or more, one or more instances of A.
- Examples:
  - ab?c                    ac or abc
  - [a-z]+                  lowercase strings
  - [a-zA-Z][a-zA-Z0-9]*    alphanumeric strings, e.g. maria1, maria2

Order of precedence

- Kleene operators (*, ?, +), then concatenation, then alternation (|); all operators are left associative.
- Example: a*b|cd* = ((a*)b)|(c(d*))

Pattern-matching primitives

- A pattern such as [ \t\n] with no action defined makes lex simply ignore that input.

    Metacharacter   Matches
    .               any character except newline
    \n              newline
    *               zero or more copies of the preceding expression
    +               one or more copies of the preceding expression
    ?               zero or one copy of the preceding expression
    ^               beginning of line (complement inside a character class)
    $               end of line
    a|b             a or b
    (ab)+           one or more copies of ab (grouping)
    [ab]            a or b
    a{3}            3 instances of a
    "a+b"           the literal string a+b
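To make the table concrete, here is a small, hypothetical rule set that exercises several of these metacharacters (the token names printed are invented for illustration); it can be built exactly like example1, linking with -lfl so the default main() is used.

    %%
    [0-9]+              printf("INT ");         /* + : one or more digits            */
    [0-9]+\.[0-9]*      printf("REAL ");        /* \. : a literal dot                */
    ab?c                printf("AC_OR_ABC ");   /* ? : optional b, so ac or abc      */
    "a+b"               printf("LITERAL ");     /* quoted string: the characters a+b */
    a{3}                printf("TRIPLE_A ");    /* {3} : exactly three a's           */
    [^a-zA-Z0-9 \t\n]+  printf("OTHER ");       /* ^ inside [ ] : complement         */
    [ \t\n]+            ;                       /* whitespace: ignore                */
    %%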

Lex predefined variables

    yytext      string containing the current lexeme
    yyleng      length of that string
    yyin        input stream pointer (FILE *)
    yyout       output stream pointer (FILE *)

Lex library routines

    yylex()     the scanner itself, returning the next token; the default main() calls yylex()
    yymore()    append the next matched string to the current yytext instead of replacing it
    yyless(n)   retain the first n characters in yytext and return the rest to the input
    yywrap()    called at end of file (EOF); the default returns 1

- Example rule using these (count words and characters; a complete counting scanner is sketched at the end of this section):

    [a-zA-Z]+    { words++; chars += yyleng; }

More Lex

- A regular expression in lex finishes with a space, tab or newline.
- The choices lex makes:
  - lex always matches the longest possible lexeme (in number of characters);
  - if two or more lexemes are of the same length, lex chooses the one corresponding to the first regular expression listed.

Lex summary

- Lex is a scanner generator: it generates C code (or C++ code) to scan the input for lexemes (strings of the language) and convert them to tokens.
- Lex defines the tokens using an input (specification) file that specifies the lexemes as regular expressions.
- Regular expressions in the specification file terminate with a space, newline or tab.
- Rules: longest possible match first; for matches of the same length, the expression specified first wins.

What is Yacc?

- A tool that automatically generates a parser according to given specifications.
  - Yacc is higher level than lex: it deals with sentences instead of words.
  - The yacc specification is given in a specification file, typically with a .y suffix.
- The parser consumes the token stream coming from lex (these tokens may have values associated with them; more on this shortly).
- Yacc uses grammar rules that allow it to analyze whether the stream of tokens from lex is legal, and it builds a syntax tree.

Yacc file format

    %{
    ... C declarations ...
    %}
    ... yacc declarations ...
    %%
    ... rules ...
    %%
    ... subroutines ...
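Picking up the words++ / chars += yyleng rule above, a small self-contained counting scanner might look like the following sketch (the variable names and the line-counting rule are my additions, not part of the course examples); build it with flex and gcc -lfl as before.

    %{
    #include <stdio.h>
    int words = 0, chars = 0, lines = 0;
    %}
    %%
    [a-zA-Z]+   { words++; chars += yyleng; }   /* a word: count it and its characters */
    \n          { lines++; chars++; }           /* newline ends a line                  */
    .           { chars++; }                    /* any other single character           */
    %%
    int main(void)
    {
        yylex();                                /* scan until end of input */
        printf("%d words, %d lines, %d characters\n", words, lines, chars);
        return 0;
    }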

Yacc Syntax

- A yacc input file also consists of up to three sections: declarations, rules, and subroutines.
  - Declarations: C code wrapped in %{ ... %} is embedded into the generated .c file; this section also contains the token declarations, i.e. the tokens that are recognized by lex (they end up in y.tab.h).
  - Rules: define how to understand the language and what action to take for each sentence.
  - Subroutines: any user code, for example a main() that calls yyparse(); a default is provided if you omit it.

The Yacc Specification File (.y)

- Definitions:
  - declarations of tokens: %token name
  - the type of the values used on the parser stack (if different from the default type, int).
  - These definitions are then written to a header file, y.tab.h, which lex includes.
- Rules: the grammar rules with their corresponding routines (actions).
- User code.

The Rule Section

    production : symbol1 symbol2    { action }
               | symbol3 symbol4    { action }
               ;

Example: give grammar rules for expressions that add or subtract two NUMBERs (assume NUMBER is already defined). A complete .y file built around these rules is sketched at the end of this section.

    statement  : expression                    { printf("= %g\n", $1); }
               ;
    expression : expression '+' expression     { $$ = $1 + $3; }
               | expression '-' expression     { $$ = $1 - $3; }
               | NUMBER                        { $$ = $1; }
               ;

[Figure: parse tree for 5 + 4 - 3 + 2 according to these productions, with each number reduced to an expression and the expressions nested.]

Let's do a simpler example

- Create a simple language to control a thermostat. We need:
  - two states that can be set to on or off,
  - a target,
  - a temperature,
  - and a number.
- Example session:

    target temperature 22
            New temperature set!

- Tokens: heat, on/off (state), target, temperature, number.
- The lexer code, example4.l, is shown in the next section.
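Here is the promised sketch of a complete .y file around the add/subtract rules. The %{ %} boilerplate, the double-valued YYSTYPE (so %g prints something sensible), and the assumption of a companion lexer that returns NUMBER tokens with yylval set and passes '+' and '-' through as characters are all mine, not from the slides.

    %{
    #include <stdio.h>
    #define YYSTYPE double      /* values on the parser stack are doubles */
    void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
    int yylex(void);
    %}

    %token NUMBER

    %%
    statement  : expression                    { printf("= %g\n", $1); }
               ;

    expression : expression '+' expression     { $$ = $1 + $3; }
               | expression '-' expression     { $$ = $1 - $3; }
               | NUMBER                        { $$ = $1; }
               ;
    %%

    int main(void) { return yyparse(); }

As written the grammar is ambiguous, so yacc/bison will report shift/reduce conflicts and resolve them by shifting; adding %left '+' '-' to the declarations removes the conflicts, a point the ambiguity slides at the end return to.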

Controlling a Thermostat: Lex input (example4.l)

    %{
    #include "example4.tab.h"   /* generated from yacc */
    extern YYSTYPE yylval;      /* needed for lex/yacc communication */
    %}
    %%
    [0-9]+          return NUMBER;
    heat            return TOKHEAT;
    on|off          return STATE;
    target          return TOKTARGET;
    temperature     return TOKTEMPERATURE;
    \n              /* ignore end of line */;
    [ \t]+          /* ignore whitespace */;
    %%

- The tokens are fed (returned) to yacc; the generated header example4.tab.h (y.tab.h) defines them.

Controlling a Thermostat: Yacc rule section (example4.y)

    commands: /* empty */
            | commands command
            ;

    command : _switch
            | _set
            ;

    _switch : TOKHEAT STATE
              { printf("\tHeat turned on or off\n"); }
            ;

    _set    : TOKTARGET TOKTEMPERATURE NUMBER
              { printf("\tTemperature set\n"); }
            ;

Empty rules and left recursion

- If a rule has an empty alternative, the nonterminal can match the empty string. For example, how would you define a comma-separated sequence of zero or more letters (A, B, C, D, E)? A complete toy grammar built on this pattern is sketched at the end of this section.

    startlist : /* empty */
              | letterlist
              ;
    letterlist: letter
              | letterlist ',' letter
              ;

- Left recursion works better with Bison because the parser can then use a bounded stack (we will see why shortly).

Controlling a Thermostat: the rest of the yacc specification (example4.y)

    %{
    #include <stdio.h>
    #include <string.h>

    void yyerror(const char *str)
    {
        fprintf(stderr, "error: %s\n", str);
    }

    int yywrap()
    {
        /* called at EOF; you can open another file here and return 0 */
        return 1;
    }

    main()
    {
        yyparse();   /* get things started */
    }
    %}

    %token NUMBER TOKHEAT STATE TOKTARGET TOKTEMPERATURE

- yyerror() is called when yacc finds an error.
- yywrap() matters when reading from multiple files: it is called at EOF, so you can open another file and return 0, or return 1 to indicate that you really are done.

Flex, Yacc and Run (example4.l, example4.y)

    atlas:maria:194 flex -l -t example4.l > example4l.c
    atlas:maria:195 bison -d example4.y
    rm example4.tab.c
    atlas:maria:196 bison -v example4.y -o example4y.c
    atlas:maria:197 gcc example4l.c example4y.c -o example4
    atlas:maria:202 example4
    heat on
            Heat turned on or off
    heat off
            Heat turned on or off
    target temperature 10
            Temperature set
    target temparature 10
    error: parse error

- Notice: there is no -lfl in the gcc link command. Do you know why?
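Returning to the comma-separated letter list: here is the promised toy grammar as a complete, buildable .y file. The LETTER token, the line rule, and the tiny hand-written yylex() are my own additions for illustration; type something like A, B, C and end the input with Ctrl-D.

    %{
    #include <stdio.h>
    #include <ctype.h>

    void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
    int yylex(void);
    %}

    %token LETTER

    %%
    line      : startlist '\n'          { printf("ok\n"); }
              ;

    startlist : /* empty */
              | letterlist
              ;

    letterlist: LETTER
              | letterlist ',' LETTER   /* left recursion keeps the parser stack shallow */
              ;
    %%

    /* a minimal hand-written lexer: letters become LETTER, other characters are returned as themselves */
    int yylex(void)
    {
        int c = getchar();
        while (c == ' ' || c == '\t')   /* skip blanks */
            c = getchar();
        if (c == EOF)
            return 0;                   /* 0 tells the parser the input is finished */
        if (isalpha(c))
            return LETTER;
        return c;                       /* ',' and '\n' are single-character tokens */
    }

    int main(void) { return yyparse(); }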

Passing values from lex to yacc

    atlas:maria:202 example4
    heat on
            Heat turned on or off
    heat off
            Heat turned on or off
    target temperature 10
            Temperature set

- We would rather have the output report the value, e.g. "Temperature set to 22".
- To get that, we add parameters to yacc. When lex matches a pattern:
  - the matched string is in yytext, and
  - to communicate a value to yacc you can use the variable yylval (so far we have not seen how to do this, but we will now).

Controlling a Thermostat: Lex input with values (example5.l)

    %{
    #include "example4.tab.h"   /* generated from yacc */
    extern YYSTYPE yylval;
    %}
    %%
    [0-9]+          { yylval = atoi(yytext); return NUMBER; }
    heat            return TOKHEAT;
    on|off          { yylval = !strcmp(yytext, "on"); return STATE; }
    target          return TOKTARGET;
    temperature     return TOKTEMPERATURE;
    \n              /* ignore end of line */;
    [ \t]+          /* ignore whitespace */;
    %%

- As before, the tokens are returned to yacc and the .tab.h header defines them; now NUMBER and STATE also carry values in yylval.

Yacc rules that use the values (example5.y)

    _switch : TOKHEAT STATE
              {
                  if ($2)
                      printf("\tHeat turned on\n");
                  else
                      printf("\tHeat turned off\n");
              }
            ;

    _set    : TOKTARGET TOKTEMPERATURE NUMBER
              { printf("\tTemperature set to %d\n", $3); }
            ;

Flex, Yacc and Run (example5.l, example5.y)

    atlas:maria:248 flex -l -t example5.l > example5l.c
    atlas:maria:249 bison -v example5.y -o example5y.c
    atlas:maria:250 bison -d example5.y
    rm example5.tab.c
    rm: remove example5.tab.c (yes/no)? yes
    atlas:maria:251 gcc example5l.c example5y.c -o example5
    atlas:maria:253 example5
    heat on
            Heat turned on
    target temperature 10
            Temperature set to 10
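The slides show example5.y only in fragments; pieced together with the declarations carried over from example4.y, the whole file would look roughly like this sketch (the exact layout and the stdio.h include are my assumptions).

    %{
    #include <stdio.h>
    #include <string.h>

    void yyerror(const char *str) { fprintf(stderr, "error: %s\n", str); }
    int yywrap() { return 1; }       /* only one input file, so we are done at EOF */
    int yylex(void);
    %}

    %token NUMBER TOKHEAT STATE TOKTARGET TOKTEMPERATURE

    %%
    commands: /* empty */
            | commands command
            ;

    command : _switch
            | _set
            ;

    _switch : TOKHEAT STATE
              {
                  if ($2) printf("\tHeat turned on\n");
                  else    printf("\tHeat turned off\n");
              }
            ;

    _set    : TOKTARGET TOKTEMPERATURE NUMBER
              { printf("\tTemperature set to %d\n", $3); }
            ;
    %%

    int main(void)
    {
        yyparse();   /* get things started */
        return 0;
    }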

Continuing the lame-servers example (example6)

- Now we write the yacc grammar and translate the lexer so that it returns values to yacc (e.g. file names and zone names) instead of printing token names as example3 did.

example6.l

    %{
    #include "example5.tab.h"
    extern YYSTYPE yylval;
    %}
    %%
    zone                    return ZONETOK;
    file                    return FILETOK;
    [a-zA-Z][a-zA-Z0-9]*    { yylval = strdup(yytext); return WORD; }
    [a-zA-Z0-9\/.-]+        { yylval = strdup(yytext); return FILENAME; }
    \"                      return QUOTE;
    \{                      return OBRACE;
    \}                      return EBRACE;
    ;                       return SEMICOLON;
    \n                      /* ignore EOL */;
    [ \t]+                  /* ignore whitespace */;
    %%

- Compared with example3, the printf actions are replaced by return statements, and the WORD and FILENAME lexemes are saved with strdup before being handed to yacc.

example6.y

- The semantic values are strings, so the definitions section contains #define YYSTYPE char * (plus the usual other routines).

    commands:    /* empty */
            |    commands command SEMICOLON
            ;

    command:     zone_set
            ;

    zone_set:    ZONETOK quotedname zonecontent
                 { printf("Complete zone for '%s' found\n", $2); }
            ;

    zonecontent: OBRACE zonestatements EBRACE
            ;

    quotedname:  QUOTE FILENAME QUOTE
                 { $$ = $2; }
            ;

- quotedname's value is the file name without the quotes ($$ = $2).
- Generic statements catch the statements inside a zone block, and blocks themselves:

    zonestatements: /* empty */
            |    zonestatements zonestatement SEMICOLON
            ;

    zonestatement: statements
            |    FILETOK quotedname
                 { printf("A zonefile name '%s' was encountered\n", $2); }
            ;

    block:       OBRACE zonestatements EBRACE SEMICOLON
            ;

    statements:  /* empty */
            |    statements statement
            ;

    statement:   WORD
            |    block
            |    quotedname
            ;
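The slide abbreviates the definitions section of example6.y to "#define YYSTYPE char *" plus "other routines". Based on the earlier examples, that section would look roughly like the sketch below; the includes and the token order are my assumptions, and the section is followed by %% and the grammar rules shown above.

    %{
    #include <stdio.h>
    #include <string.h>
    #define YYSTYPE char *      /* token and rule values ($$, $2, yylval) are strings */

    void yyerror(const char *str) { fprintf(stderr, "error: %s\n", str); }
    int yywrap() { return 1; }
    int yylex(void);
    int yyparse(void);

    int main(void) { yyparse(); return 0; }
    %}

    %token WORD FILENAME QUOTE OBRACE EBRACE SEMICOLON ZONETOK FILETOK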

Flex, Yacc and Run (example6.l, example6.y)

    atlas:maria:194 flex -l -t example6.l > example6l.c
    atlas:maria:195 bison -d example6.y
    rm example6.tab.c
    atlas:maria:196 bison -v example6.y -o example6y.c
    atlas:maria:197 gcc example6l.c example6y.c -o example6
    atlas:maria:202 example6 < input6.txt
    A zonefile name '/etc/bind/db.root' was encountered
    Complete zone for '.' found

Summary

- In the yacc file you write your own main(), which calls yyparse().
  - yyparse() is created by yacc and ends up in y.tab.c.
- yyparse() reads a stream of token/value pairs from yylex().
  - You either code yylex() yourself or have lex generate it for you.
- yylex() returns an integer value representing the token type; you can optionally pass a value for the token in yylval (default type: int).
  - Tokens have numeric ids starting from 256.

Yacc Theory (continued next time)

- Yacc uses bottom-up shift/reduce parsing: it gets a token and pushes it onto a stack.
- Recall that grammars for yacc are a variant of BNF and can be used to express context-free languages.
  - A rule has the form X -> p, where X is a nonterminal and p is a string of nonterminals and/or terminals.
  - The grammar is context free because X can be replaced by p regardless of the context X appears in.
- Example: a context-free grammar for strings with different numbers of a's and b's (M for mixed, A for more a's, B for more b's):
  - S -> A | B
  - A -> MaA | MaM
  - B -> MbB | MbM
  - M -> aMbM | bMaM | epsilon
  - M generates all strings with the same number of a's and b's; A generates strings with more a's, and B strings with more b's.
- Example: a grammar that multiplies and adds numbers:

    E -> E + E    (rule 1)
    E -> E * E    (rule 2)
    E -> id       (rule 3)

  - id is returned by lex (a terminal) and appears only on right-hand sides.
  - x + y * z is generated by:

    E -> E * E          (rule 2)
      -> E * z          (rule 3)
      -> E + E * z      (rule 1)
      -> E + y * z      (rule 3)
      -> x + y * z      (rule 3)

Shift-reduce parsing

    E -> E + E    (rule 1)
    E -> E * E    (rule 2)
    E -> id       (rule 3)

- To parse the expression we go in reverse: we reduce the expression to a single nonterminal, using shift-reduce parsing with a stack for storing terms.
- When the top of the stack matches the right-hand side of a production, we replace the match with the production's left-hand side.

    1.  . x + y * z     shift   (the stack is to the left of the dot)
    2.  x . + y * z     reduce (rule 3)
    3.  E . + y * z     shift
    4.  E + . y * z     shift
    5.  E + y . * z     reduce (rule 3)
    6.  E + E . * z     shift
    7.  E + E * . z     shift
    8.  E + E * z .     reduce (rule 3)
    9.  E + E * E .     reduce (rule 2)   emit multiply
    10. E + E .         reduce (rule 1)   emit add
    11. E .             accept

- At step 6 there is a shift-reduce conflict: the parser could reduce E + E (rule 1) or shift *, because the grammar is ambiguous.

Ambiguity

- Ambiguity can be resolved by
  - rewriting the grammar, or
  - indicating which operator has precedence (in yacc).
- Reduce-reduce conflict example:

    E -> T
    E -> id
    T -> id

  - id could reduce either to E or to T; yacc always uses the first rule listed.
- For shift-reduce conflicts, yacc always shifts.

Maria Hybinette, UGA
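As a footnote on "indicating which operator has precedence": yacc/bison precedence declarations (%left, %right, %nonassoc) are the standard way to keep the ambiguous grammar but remove the conflicts. The sketch below is illustrative only; the ID token and the tiny hand-written yylex() are assumptions, not from the slides.

    %{
    #include <stdio.h>
    #include <ctype.h>
    void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
    int yylex(void);
    %}

    %token ID

    %left '+'       /* lower precedence, left associative           */
    %left '*'       /* declared later, therefore higher precedence  */

    %%
    expr : expr '+' expr    { /* emit add */ }
         | expr '*' expr    { /* emit multiply */ }
         | ID
         ;
    %%

    /* letters are IDs; '+' and '*' are returned as themselves; end of line ends the input */
    int yylex(void)
    {
        int c = getchar();
        while (c == ' ' || c == '\t')
            c = getchar();
        if (c == EOF || c == '\n')
            return 0;
        if (isalpha(c))
            return ID;
        return c;
    }

    int main(void) { return yyparse(); }

With these declarations, x + y * z reduces the multiplication before the addition, matching the shift-reduce trace above, and bison reports no conflicts.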