CS 11 OCaml track: lecture 6


CS 11 OCaml track: lecture 6

Today:
- Writing a computer language
- Parser generators
  - lexers (ocamllex)
  - parsers (ocamlyacc)
- Abstract syntax trees

Problem (1)
- We want to implement a computer language
- This is a very complex problem
  - however, there is much similarity between implementations of different languages
  - we can take advantage of this

Problem (2)
- We will implement a (very, very) simplified version of the Scheme programming language
  - hopefully familiar from CS 4
- We call our version of Scheme "bogoscheme"
  - truth in advertising
  - file names end in .bs

Problem (3)
- Here is the Scheme program we want to be able to run:

    (define factorial
      (lambda (n)
        (if (= n 0)
            1
            (* n (factorial (- n 1))))))
    (print (factorial 10))

- Result should be 3628800
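As a sanity check on the expected result, here is the same computation written directly in OCaml (just a sketch for comparison; it is not part of the interpreter we will build):

```ocaml
(* OCaml version of the bogoscheme factorial program above *)
let rec factorial n =
  if n = 0 then 1 else n * factorial (n - 1)

let () = assert (factorial 10 = 3628800)
```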

Problem (4)
- Two basic ways to implement languages:
  - interpreter
  - compiler
- Difference?

Interpreters and compilers (1)
- A compiler takes a program (as a file or files of text) and generates a machine-language executable file (or files) from it
- An interpreter takes a program (as a file or files of text), converts it to some internal representation, and executes it immediately

Interpreters and compilers (2)
- Some language processors are intermediate between interpreters and compilers
  - e.g. Java
- The compiler converts source code to Java bytecode
- An interpreter interprets the bytecode
- A JIT compiler can compile bytecode to machine language "on the fly"

Building a compiler
- Compilers have many stages
- Start with source code
  - usually in text files
- Go through a number of transformations
- Eventually output machine code
  - which can be executed directly

Stages of a compiler
- Typical compiler stages:
  - Lexer: converts the input file (considered as a string) into a sequence of tokens
  - Parser: converts the sequence of tokens into an "abstract syntax tree" (AST)
  - [many other stages]
  - Code generator: emits machine language code

Stages of a typical interpreter
- Typical interpreter stages:
  - Lexer: converts the input file (considered as a string) into a sequence of tokens
  - Parser: converts the sequence of tokens into an "abstract syntax tree" (AST)
  - Evaluator: evaluates the AST directly

Stages of a Scheme interpreter
- Similar to a typical interpreter, with one extra stage:
  - Lexer: converts the input file (considered as a string) into a sequence of tokens
  - Parser: converts the sequence of tokens into a sequence of "S-expressions"
  - Converter: converts the S-expressions into an AST
  - Evaluator: evaluates the AST directly
- This is the subject of labs 5 and 6

Stages of a simple interpreter
- Lexer: converts the input file (considered as a string) into a sequence of tokens
- Parser: recognizes sequences of tokens and executes them directly
- Only useful for very simple languages, e.g. calculators
- We will show an example later

Lexer (1)
- A "lexer" ("lexical analyzer") is the first stage of interpretation or compilation
- Takes raw source code and recognizes syntactically meaningful "tokens"
- Also throws out unnecessary stuff:
  - comments
  - whitespace

Lexer (2)
- What is a token?
  - Simple literal data values
    - integers, booleans, etc.
  - Punctuation
    - e.g. left, right parentheses
  - Identifiers
    - names of functions, variables
  - Keywords, operators (if any)
    - we won't need these

Lexer (3)
- Input: (define x 10) ; set x to 10
- Possible output of the lexer:

    TOK_LPAREN, TOK_ID("define"), TOK_ID("x"), TOK_INT(10), TOK_RPAREN

- NOTE: whitespace and comments are thrown away
- The sequence of tokens will be handed off to the parser
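A token stream like this maps naturally onto an OCaml variant type. The sketch below uses the constructor names from the slide (TOK_LPAREN and so on); the actual token type in lab 5 may be named differently:

```ocaml
(* A hypothetical token type matching the slide's lexer output *)
type token =
  | TOK_LPAREN
  | TOK_RPAREN
  | TOK_ID of string
  | TOK_INT of int

(* The lexer output for "(define x 10) ; set x to 10" --
   note that the comment and the whitespace have vanished *)
let tokens =
  [TOK_LPAREN; TOK_ID "define"; TOK_ID "x"; TOK_INT 10; TOK_RPAREN]

let () = assert (List.length tokens = 5)
```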

Lexer (4)
- How do we recognize tokens?
- Could write a long, boring string-recognizing program by hand
- More modern approach: use regular expressions which match the tokens of interest
- Each token type gets its own regular expression (also called a regexp)

Regular expressions (1)
- Regexps are a way to identify a class of strings
- Simplest regexps match literal strings:
    "foo"  → recognizes the literal string "foo" only
- Or fixed characters:
    '('    → recognizes a left parenthesis only
- Or an arbitrary character:
    _      → matches any single character
- Or the end of file:
    eof    → matches the end-of-file "character"

Regular expressions (2)
- Regexps can also match multiples of other regexps:
    regexp *  → matches zero or more occurrences of regexp
    "foo" *   → matches "", "foo", "foofoo", ...
    regexp +  → matches one or more occurrences of regexp
    regexp ?  → matches zero or one occurrence of regexp

Regular expressions (3)
- Regexps can also match a sequence of smaller regexps:
    regexp1 regexp2   → matches regexp1 followed by regexp2
- Can also match any character in a set:
    ['a' 'b' 'c']     → matches any of 'a', 'b', 'c'
    ['a' - 'z']       → matches any char from 'a' to 'z'
    [^ 'a' 'b' 'c']   → matches any char except 'a', 'b', 'c'

Regular expressions (4)
- "Or" patterns:
    regexp1 | regexp2  → matches either regexp1 or regexp2
- There are some other regexp patterns as well
  - see the ocamllex manual for the full list
- NOTE: regexp syntax varies between language implementations
  - though you only need to know the OCaml version
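To get a feel for what a regexp like ['0'-'9']+ matches, here is a hand-written equivalent in plain OCaml. This is a sketch of the kind of test ocamllex generates for you, not how the generated code actually looks (String.for_all needs OCaml 4.13 or later):

```ocaml
(* A hand-rolled recognizer for the regexp ['0'-'9']+ :
   one or more digit characters *)
let matches_digits s =
  s <> "" && String.for_all (fun c -> '0' <= c && c <= '9') s

let () =
  assert (matches_digits "3628800");
  assert (not (matches_digits ""));       (* + means at least one *)
  assert (not (matches_digits "42x"))     (* every char must be a digit *)
```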

Lexer generators (1)
- Writing lexers by hand is boring
  - and easily automated
- Modern approach:
  - describe the lexer in a high-level specification in a special file
  - use a special program to convert this specification into code in the language needed

Lexer generators (2)
- In OCaml:
  - Write the lexer specification in a file ending in ".mll" (ML Lexer)
    - this is NOT OCaml code
    - but it's fairly close
  - The ocamllex program converts this into OCaml code (a file ending in ".ml")
  - The ".ml" file is compiled normally

Lexer generators (3)
- We will go through the details of the ocamllex file format when we go through the example

Parsing Scheme (1)
- Most language implementations have a parser which
  - takes input from the output of the lexer (i.e. a sequence of tokens)
  - converts it into an AST (abstract syntax tree)
- We will do it slightly differently
  - we will generate "S-expressions" from the tokens
  - then convert the S-expressions into an AST

Parsing Scheme (2)
- Advantages of S-expressions:
  - Extremely easy to write the parser!
    - almost the simplest possible parser imaginable
  - Can change the AST without having to rewrite the parser
- Disadvantage of S-expressions:
  - Also have to write the converter from S-expressions to the AST

S-expressions (1)
- S-expression stands for "symbolic expression"
- Basically a nested list of symbols
- Simple, regular format
- Very easy to parse

S-expressions (2)
- Definition of S-expression:
  - An S-expression is either
    - an atom, or
    - a list of S-expressions
- Note the recursive definition!
  - S-expressions are defined in terms of themselves
  - similar to recursive datatype definitions in OCaml

Atoms
- An "atom" is a single indivisible syntactic entity
- Examples:
  - a boolean
  - an integer
  - an identifier
- NOT the same as a token
  - a "left parenthesis" is not an atom

S-expressions (3)
- Example of an S-expression:
- Source code: (define x 10)
- S-expression version:

    LIST[ ATOM["define"] ATOM["x"] ATOM[10] ]

S-expressions (4)
- Better S-expression version:

    Expr_list [Expr_atom (Atom_id "define");
               Expr_atom (Atom_id "x");
               Expr_atom (Atom_int 10)]

- This version can be written as an OCaml datatype

S-expressions in OCaml (1)
- In file sexpr.mli:

    type atom =
      | Atom_unit
      | Atom_bool of bool
      | Atom_int of int
      | Atom_id of string

S-expressions in OCaml (2)
- In file sexpr.mli:

    type expr =
      | Expr_atom of atom
      | Expr_list of expr list

- That's all there is to S-expressions!
- This is what the parser has to generate from a sequence of tokens
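With those two type definitions, the S-expression for (define x 10) can be built directly as an OCaml value. A self-contained sketch (the types are repeated here so the snippet stands alone):

```ocaml
type atom =
  | Atom_unit
  | Atom_bool of bool
  | Atom_int of int
  | Atom_id of string

type expr =
  | Expr_atom of atom
  | Expr_list of expr list

(* The S-expression for (define x 10) *)
let sexpr =
  Expr_list [Expr_atom (Atom_id "define");
             Expr_atom (Atom_id "x");
             Expr_atom (Atom_int 10)]

(* Pattern matching takes it apart just as easily *)
let () =
  match sexpr with
  | Expr_list [Expr_atom (Atom_id "define"); _; Expr_atom (Atom_int n)] ->
      assert (n = 10)
  | _ -> assert false
```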

S-expr version of our program

    LIST[ ATOM["define"]
          ATOM["factorial"]
          LIST[ ATOM["lambda"]
                LIST[ ATOM["n"] ]
                LIST[ ATOM["if"]
                      LIST[ ATOM["="] ATOM["n"] ATOM[0] ]
                      ATOM[1]
                      LIST[ ATOM["*"]
                            ATOM["n"]
                            LIST[ ATOM["factorial"]
                                  LIST[ ATOM["-"] ATOM["n"] ATOM[1] ] ] ] ] ] ]
    LIST[ ATOM["print"]
          LIST[ ATOM["factorial"] ATOM[10] ] ]

Parser generators (1)
- Like lexers, we could write the parser by hand, but it's extremely boring and error-prone
- Instead, there are programs called "parser generators" which can do this given a high-level specification of the parser
- The OCaml parser generator is called ocamlyacc
  - yacc originally meant "yet another compiler compiler"

Parser generators (2)
- A parser generator specification includes:
  - a description of the different token types
    - their names
    - the type of any associated data
  - a description of the grammar of the language as a "context-free grammar"
  - sometimes some other stuff
    - e.g. operator precedence declarations
    - we won't need this

Parser generators (3)
- The parser generator specification is in a file ending in ".mly"
  - stands for "ML Yacc" file
- Similar to the ocamllex file; it has its own format which is different from OCaml source code
- Will see an example later

Context-free grammars (1)
- High-level description of language syntax
- Two elements:
  - terminals -- correspond to tokens
  - nonterminals -- (usually) correspond to some kind of expression in the language
- One kind of "rule", called a "production"
  - describes how each nonterminal corresponds to a sequence of other nonterminals and/or terminals

Context-free grammars (2)
- There is also an entry point, which is a nonterminal that may represent
  - an entire program (typical for compilers)
  - an entire expression (typical for interpreters)
- Our entry point will be a single S-expression, or None if the EOF token is encountered
  - its type will be Sexpr.expr option

Context-free grammars (3)
- Context-free grammars are often very close to OCaml type definitions
  - which is one reason OCaml is a nice language to write language interpreters/compilers in
- Will see an example of a CFG/parser generator specification in the example later

Abstract Syntax Trees (1)
- An Abstract Syntax Tree (AST) is the final goal of parsing
- An AST is a representation of the syntax of a program or expression as an OCaml datatype
- Includes everything relevant to interpreting the program
- Does not include irrelevant stuff
  - whitespace, comments, etc.

Abstract Syntax Trees (2)
- Our AST is defined in ast.mli:

    type id = string

    type expr =
      | Expr_unit
      | Expr_bool of bool
      | Expr_int of int
      | Expr_id of id
      | Expr_define of id * expr
      | Expr_if of expr * expr * expr
      | Expr_lambda of id list * expr list
      | Expr_apply of expr * expr list

Abstract Syntax Trees (3)
- The AST defines all valid expressions in the language
- The first few cases represent expressions consisting of a single data value:

    type expr =
      | Expr_unit           (* unit value *)
      | Expr_bool of bool   (* boolean value *)
      | Expr_int of int     (* integer *)
      ...

- N.B. The Expr_unit value is like the OCaml unit value

Abstract Syntax Trees (4)
- Identifiers are just strings:

    type id = string  (* id is a type alias for "string" *)

    type expr =
      ...
      | Expr_id of id
      ...

- Expr_id expressions consist of a single identifier
  - for instance, the name of a function

Abstract Syntax Trees (5)
- "define" expressions:

    type expr =
      ...
      | Expr_define of id * expr
      ...

- the id represents the name being defined
- the expr represents the thing it's defined to be

Abstract Syntax Trees (6)
- "if" expressions:

    type expr =
      ...
      | Expr_if of expr * expr * expr
      ...

- an if expression consists of three subexpressions:
  - the test case (always evaluated)
  - the "then" case (evaluated if the test evaluates to true)
  - the "else" case (evaluated if the test evaluates to false)

Abstract Syntax Trees (7)
- "lambda" expressions:

    type expr =
      ...
      | Expr_lambda of id list * expr list
      ...

- lambda expressions represent an anonymous function
  - the id list is the list of formal parameters of the function
  - the expr list is the body of the function

Abstract Syntax Trees (8)
- "apply" expressions represent function application:

    type expr =
      ...
      | Expr_apply of expr * expr list

- the expr represents the function being applied
  - could be an identifier
  - could be a lambda expression
- the expr list represents the arguments of the function application
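Putting the cases together, the AST for the second top-level form of our program, (print (factorial 10)), is a small nested value. A self-contained sketch (the full type is repeated so the snippet compiles on its own):

```ocaml
type id = string

type expr =
  | Expr_unit
  | Expr_bool of bool
  | Expr_int of int
  | Expr_id of id
  | Expr_define of id * expr
  | Expr_if of expr * expr * expr
  | Expr_lambda of id list * expr list
  | Expr_apply of expr * expr list

(* AST for (print (factorial 10)):
   an application of "print" to one argument, which is itself
   an application of "factorial" to the integer 10 *)
let prog =
  Expr_apply (Expr_id "print",
              [Expr_apply (Expr_id "factorial", [Expr_int 10])])

let () =
  match prog with
  | Expr_apply (Expr_id f, [_]) -> assert (f = "print")
  | _ -> assert false
```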

AST version of our program

    DEFINE["factorial"
      LAMBDA[(ID["n"])
        IF[APPLY[ID["="] ID["n"] INT[0]]
           INT[1]
           APPLY[ID["*"]
                 ID["n"]
                 APPLY[ID["factorial"]
                       APPLY[ID["-"] ID["n"] INT[1]]]]]]]
    APPLY[ID["print"] APPLY[ID["factorial"] INT[10]]]

Example -- calculator language
- We will walk through the "calculator" example from the ocamllex/ocamlyacc documentation
- In the process, we will see the format of ocamllex/ocamlyacc files
- This example is NOT a typical language implementation
  - we don't generate an AST
  - instead, we just execute code directly while parsing
- Nevertheless, the principles are the same

parser.mly file, part 1

    /* token definitions */
    %token <int> INT
    %token PLUS MINUS TIMES DIV
    %token LPAREN RPAREN    /* left, right parentheses */
    %token EOL              /* end of line */

    /* operators and precedences */
    %left PLUS MINUS        /* lowest precedence */
    %left TIMES DIV         /* medium precedence */
    %nonassoc UMINUS        /* highest precedence */

    %start main             /* the entry point */
    %type <int> main        /* type of the entry point */

    %%                      /* start of next section */

parser.mly file, part 2

    /* "main" is the entry point; main and expr are nonterminals;
       INT, PLUS, etc. are terminals; each alternative is a production,
       and the OCaml code in { } is its action */
    main:
        expr EOL                  { $1 }
    ;
    expr:
        INT                       { $1 }
      | LPAREN expr RPAREN        { $2 }
      | expr PLUS expr            { $1 + $3 }
      | expr MINUS expr           { $1 - $3 }
      | expr TIMES expr           { $1 * $3 }
      | expr DIV expr             { $1 / $3 }
      | MINUS expr %prec UMINUS   { - $2 }   /* %prec is a precedence specifier */
    ;

How parsing works
- The parser calls the lexer to get tokens one at a time
- The parser checks to see if the right-hand side of a production (the part after the colon) can be matched
  - if so, it executes the corresponding action (which is (almost) OCaml code)
  - if not, it pushes the token onto a stack

Shifting and reducing (1)
- Two fundamental actions of the parser: shifting and reducing
- Shifting:
  - putting a new token onto the stack
- Reducing:
  - popping off all the tokens on the stack corresponding to the RHS of a given production
  - pushing the LHS of the production onto the stack
    - along with its associated value
  - executing the action of that production

Shifting and reducing (2)
- Can have cases where the grammar is ambiguous
- Ambiguity → when the rules allow
  - more than one different reduction (called a reduce/reduce conflict)
  - either a shift or a reduction (called a shift/reduce conflict)
- Shift/reduce conflicts are resolved by choosing the shift
- Reduce/reduce conflicts are unresolvable
  - means the grammar is completely broken

Parsing actions
- Parsing actions are OCaml code inside curly brackets
- The $1, $2, $3 values represent the value associated with the corresponding location in the RHS of the production:

    expr PLUS expr   { $1 + $3 }

- Here, $1 represents the value of the first expr
- $3 represents the value of the second expr
- $2 would represent the value of the PLUS token
  - meaningless (the PLUS token has no value)

Example: 2 + 2 = 4
- Input: 2 + 2 <return>
- Tokens: INT(2) PLUS INT(2) EOL
- The parser
  - shifts INT(2)
  - reduces INT(2) to expr with value 2
  - shifts PLUS
  - shifts INT(2)
  - reduces INT(2) to expr with value 2
  - reduces expr PLUS expr to expr with value 4
  - shifts EOL
  - reduces expr EOL to main with value 4

Note
- In a more realistic language (like in lab 5) we would not compute results inside the parser actions
- Instead, we would generate an AST
- For our example, we might have e.g.

    expr PLUS expr   { Add_expr ($1, $3) }

- (assuming that Add_expr is one constructor of a calculator AST)
- Could evaluate the AST later
- Complex languages need an AST to be interpreted correctly
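A minimal sketch of this AST-building style: the constructor names (Add_expr, etc.) are hypothetical, as in the slide; the parser actions would build these values, and a separate evaluator would compute the results later:

```ocaml
(* Hypothetical calculator AST; in the AST-building style, the
   parser action for "expr PLUS expr" would produce
   Add_expr ($1, $3) instead of computing $1 + $3 directly *)
type cexpr =
  | Int_expr of int
  | Add_expr of cexpr * cexpr
  | Neg_expr of cexpr

(* A separate evaluation stage, run after parsing *)
let rec eval = function
  | Int_expr n -> n
  | Add_expr (e1, e2) -> eval e1 + eval e2
  | Neg_expr e -> - (eval e)

(* The input "2 + 2" would parse to this AST... *)
let ast = Add_expr (Int_expr 2, Int_expr 2)

(* ...which we can evaluate whenever we like *)
let () = assert (eval ast = 4)
```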

lexer.mll, part 1

    {
      (* "Header" *)
      open Parser
      exception Eof
    }
    (* Continued on next slide *)

- The header is just code that gets copied to the front of the .ml file that gets generated
- Usually just some declarations (can be empty)

lexer.mll, part 2

    (* "token" is the name of the lexer rule; each line pairs a
       regular expression with its action *)
    rule token = parse
        [' ' '\t']          { token lexbuf }  (* skip blanks *)
      | ['\n']              { EOL }
      | ['0'-'9']+ as lxm   { INT (int_of_string lxm) }
      | '+'                 { PLUS }
      | '-'                 { MINUS }
      | '*'                 { TIMES }
      | '/'                 { DIV }
      | '('                 { LPAREN }
      | ')'                 { RPAREN }
      | eof                 { raise Eof }

Strategy for writing lab 5 (1)
- Don't worry about the S-expression to AST conversion at first
- Write the parser with dummy actions
  - make sure ocamlyacc doesn't give any errors
- Then write the lexer (should be easy)
  - make sure ocamllex doesn't give any errors
- Compile the lexer_test and parser_test programs
- Test them on the factorial.bs input file

Strategy for writing lab 5 (2)
- Once the parser is working, work on the S-expression to AST conversion (in file ast.ml)
- This is the hardest part of the lab
  - have to handle lots of different cases
  - USE PATTERN MATCHING!
- Your code should work on ANY valid input, not just on the factorial.bs file

Next time
- We'll go over lab 6, which is the rest of the mini-Scheme language implementation