[Lexical Analysis] Bikash Balami


[Lexical Analysis]
Compiler Design and Construction (CSc 352)
Compiled by Bikash Balami
Central Department of Computer Science and Information Technology (CDCSIT)
Tribhuvan University, Kirtipur, Kathmandu, Nepal

Lexical Analysis

Lexical analysis is the initial part of reading and analyzing the program text. The text is read and divided into tokens, each of which corresponds to a symbol of the programming language, e.g. a variable name, a keyword or a number. A lexical analyzer, or lexer, therefore takes a string of individual characters as its input and divides it into tokens, discarding comments and whitespace (blanks and newlines) along the way.

Overview of Lexical Analysis

A lexical analyzer, also called a scanner, typically has the following functionality and characteristics.
- Its primary function is to convert a sequence of characters into a sequence of tokens. This means less work for the subsequent phases of the compiler.
- The scanner must identify and categorize specific character sequences into tokens. It must know whether two adjacent characters in the file belong to the same token or whether the second character must start a different token.
- Most lexical analyzers discard comments and whitespace; in most languages these characters only serve to separate tokens from each other.
- It must handle lexical errors (illegal characters, malformed tokens) by reporting them intelligibly to the user.
- Efficiency is crucial; a scanner may perform elaborate input buffering.
- Token categories can be precisely and formally specified using regular expressions, e.g. IDENTIFIER = [a-zA-Z][a-zA-Z0-9]*.
- Lexical analyzers can be written by hand, or implemented automatically using finite automata.
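The points above can be made concrete with a minimal hand-written scanner sketch in C for just two token categories, identifiers matching [a-zA-Z][a-zA-Z0-9]* and plain numbers. The token names, the 64-character lexeme limit and the use of standard I/O are illustrative assumptions, not part of any particular compiler.

/* Minimal hand-written scanner sketch: reads characters, skips whitespace,
 * and groups letters/digits into identifier and number tokens.            */
#include <stdio.h>
#include <ctype.h>

enum TokenKind { TOK_ID, TOK_NUM, TOK_EOF, TOK_ERROR };

struct Token {
    enum TokenKind kind;
    char lexeme[64];
};

static struct Token get_next_token(FILE *in) {
    struct Token t = { TOK_EOF, "" };
    int c = fgetc(in);

    while (c == ' ' || c == '\t' || c == '\n')      /* discard whitespace */
        c = fgetc(in);
    if (c == EOF) return t;

    size_t n = 0;
    if (isalpha(c)) {                    /* IDENTIFIER = [a-zA-Z][a-zA-Z0-9]* */
        t.kind = TOK_ID;
        do { if (n < 63) t.lexeme[n++] = (char)c; c = fgetc(in); }
        while (isalnum(c));
    } else if (isdigit(c)) {             /* NUMBER = [0-9]+ */
        t.kind = TOK_NUM;
        do { if (n < 63) t.lexeme[n++] = (char)c; c = fgetc(in); }
        while (isdigit(c));
    } else {
        t.kind = TOK_ERROR;              /* illegal character: report it and move on */
        t.lexeme[n++] = (char)c;
        c = fgetc(in);
    }
    if (c != EOF) ungetc(c, in);         /* push the one-character lookahead back */
    t.lexeme[n] = '\0';
    return t;
}

int main(void) {
    struct Token t;
    while ((t = get_next_token(stdin)).kind != TOK_EOF)
        printf("token %d, lexeme \"%s\"\n", t.kind, t.lexeme);
    return 0;
}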

Role of Lexical Analyzer

[Figure: Source Code flows into the Lexical Analyzer, which exchanges token / getnexttoken() with the Parser; both the Lexical Analyzer and the Parser consult the Symbol Table]
Figure:- Interaction of lexical analyzer with parser

The lexical analyzer works in lock step with the parser. The parser requests the next token from the lexical analyzer whenever it needs one, using getnexttoken(). The lexical analyzer may also perform other auxiliary operations such as removing redundant whitespace and token separators (like the semicolon ;). Other operations performed by the lexer include removing comments and providing line numbers to the parser for error reporting. The function of a lexical analyzer is thus to read the input stream representing the source program, one character at a time, and to translate it into valid tokens.

Issues in Lexical Analysis

There are several reasons for separating the analysis phase of compiling into lexical analysis and parsing.
1) Simpler design is the most important consideration. Separating lexical analysis from syntax analysis often allows us to simplify one or the other of these phases.
2) Compiler efficiency is improved.
3) Compiler portability is enhanced.

Tokens, Patterns, Lexemes

In a compiler, a token is a single word of source code input. Tokens are separately identifiable blocks with collective meaning. When a string representing a program is broken into a sequence of substrings such that each substring represents a constant, identifier, operator, keyword etc. of the language, these substrings are called the tokens of the language. They are the building blocks of the programming language, e.g. if, else, identifiers etc.

Lexemes are the actual strings matched as tokens; they are the specific characters that make up a token, for example abc and 123. A token can represent more than one lexeme, i.e. the token intnum can represent the lexemes 123, 244, 4545 etc.

Patterns are rules describing the set of lexemes belonging to a token. A pattern is usually a regular expression, e.g. the intnum token can be defined as [0-9][0-9]*.

Attributes of Tokens

When a token represents more than one lexeme, the lexical analyzer must provide additional information about the particular lexeme matched; i.e. when there is more than one lexeme for a token, we need to attach extra information to the token. This additional information is called the attribute of the token. For example, if the token id matches both var1 and var2, the lexical analyzer must be able to represent var1 and var2 as different identifiers.

Example: take a statement such as area = 3.14 * r * r
1. getnexttoken() returns (id, attr), where attr is a pointer to the entry for area in the symbol table.
2. getnexttoken() returns (assign); no attribute is needed if there is only one assignment operator.
3. getnexttoken() returns (floatnum, 3.14), where the attribute is the actual value of the floating-point constant.
And so on. A token type together with its attribute uniquely identifies a lexeme (a small C sketch of one possible representation is given below, after the note on lexical errors).

Lexical Errors

Although errors at the lexical analysis stage are not common, they are possible. When an error occurs the lexical analyzer must not halt the process; it can print an error message and continue. An error is detected in this phase when no pattern matches the remaining input. Error recovery techniques include deleting an extraneous character, inserting a missing character, replacing an incorrect character by the correct one, transposing adjacent characters, etc. Lexical error recovery is normally an expensive process, e.g. finding the minimal number of transformations that would turn the input into correct tokens.
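Returning to token attributes, the C sketch below (names are hypothetical) pairs a token kind with an attribute: a symbol-table index for id and the numeric value for floatnum, matching the (token, attribute) pairs returned by getnexttoken() above.

/* Sketch of a token with an attribute: the kind plus its attribute
 * uniquely identify the lexeme.  The constant value is illustrative. */
#include <stdio.h>

enum TokenKind { TK_ID, TK_ASSIGN, TK_FLOATNUM, TK_MUL };

struct Token {
    enum TokenKind kind;
    union {
        int    sym_index;   /* attribute of TK_ID: entry in the symbol table */
        double value;       /* attribute of TK_FLOATNUM: the numeric value   */
    } attr;                 /* TK_ASSIGN and TK_MUL carry no attribute       */
};

int main(void) {
    /* tokens a lexer might hand to the parser for:  area = 3.14 * r * r */
    struct Token id_area  = { TK_ID,       { .sym_index = 0 } };   /* symtab entry of "area" */
    struct Token constant = { TK_FLOATNUM, { .value = 3.14 } };    /* the constant's value   */
    printf("id -> symtab[%d], floatnum -> %g\n",
           id_area.attr.sym_index, constant.attr.value);
    return 0;
}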

Implementing a Lexical Analyzer (Approaches)

1. Use a lexical analyzer generator such as flex, which produces a lexical analyzer from a specification given as regular expressions. We do not have to take care of reading and buffering the input.
2. Write the lexical analyzer in a general-purpose programming language such as C, using the I/O facilities of the language for reading and buffering the input.
3. Write the lexical analyzer in assembly language and explicitly manage the reading of input.

The above strategies are listed in increasing order of difficulty, but also of achievable efficiency; since lexical analysis deals with individual characters, it is usually worth spending some effort to get an efficient result.

Look Ahead and Buffering

Recognizing a token often requires looking ahead several characters beyond the matched lexeme before the token can be returned. For example, int is a keyword in C but integer is an identifier, so when the scanner has read i, n, t it still has to examine further characters to see whether the word is just int or something longer. The characters read ahead but not consumed by the lexeme must then be rescanned, and moving back and forth over the input like this is time consuming. To reduce this overhead and move back and forth efficiently, an input buffering technique is used.

Input Buffering

We consider lookahead with 2N buffering, first without and then with sentinels.

[Figure: an input buffer divided into two halves of N characters each, holding source text such as float x, y, z;\n ... printf("hello world"); ... eof, with a lexeme pointer and a forward pointer into the buffer]
Figure:- An input buffer in two halves

We divide the buffer into two halves of N characters each; normally N is the number of characters in one disk block, e.g. 1024. Rather than reading character by character from the file, we read N input characters at once.

If there are fewer than N characters remaining in the input, an eof marker is placed after them. There are two pointers (see the figure above): the portion between the lexeme pointer and the forward pointer is the current lexeme. Once a match for a pattern is found, both pointers point at the same place and only the forward pointer is moved. This method has limited lookahead, so it may not always work, e.g. for a multi-line comment in C. In this scheme, each time the forward pointer is moved we must check whether the end of one of the halves has been reached (two tests each time the forward pointer is not at the end of a half):

    if forward points at end of first half
        reload second half
        forward++
    else if forward points at end of second half
        reload first half
        forward = start of first half
    else
        forward++
Figure:- Code to advance forward pointer (first scheme)

The number of such tests can be reduced if we place a sentinel character (eof) at the end of each half.

[Figure: the same two-half buffer with an eof sentinel stored at the end of each half (here the first sentinel happens to fall in the middle of printf), together with the lexeme pointer and forward pointer]
Figure:- Sentinels at end of each buffer half

    forward++
    if forward points at eof
        if forward points at end of first half
            reload second half
            forward++
        else if forward points at end of second half
            reload first half
            forward = start of first half
        else
            terminate lexical analysis      (eof within a half marks the real end of input)
Figure:- Lookahead code with sentinels
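A C sketch of this two-half buffering scheme with sentinels follows. The tiny half size N, the use of '\0' as the sentinel/eof marker, and the purely sequential reading (no lexeme_begin handling, no retraction across a reload) are simplifying assumptions; a real scanner must also avoid overwriting the half that still holds the start of the current lexeme.

/* Two-half input buffer with a sentinel slot after each half. */
#include <stdio.h>

#define N 16                    /* half size; normally one disk block, e.g. 1024 */
#define SENTINEL '\0'           /* stands in for the eof marker of the figures   */

static char  buf[2 * N + 2];    /* two N-character halves, each followed by a sentinel slot */
static FILE *src;
static char *forward;           /* the forward pointer */

static void reload(char *half) {        /* read up to N characters into one half */
    size_t got = fread(half, 1, N, src);
    half[got] = SENTINEL;               /* also marks the real end of input when got < N */
}

static int next_char(void) {            /* advance forward and return the next character */
    for (;;) {
        char c = *forward++;
        if (c != SENTINEL)
            return (unsigned char)c;
        if (forward - 1 == buf + N) {                /* sentinel at end of first half  */
            reload(buf + N + 1);                     /* reload second half             */
            forward = buf + N + 1;
        } else if (forward - 1 == buf + 2 * N + 1) { /* sentinel at end of second half */
            reload(buf);                             /* reload first half              */
            forward = buf;                           /* forward = start of first half  */
        } else {
            return EOF;                  /* sentinel inside a half: real end of input  */
        }
    }
}

int main(void) {
    src = stdin;
    reload(buf);                /* fill the first half; static buf already holds '\0' sentinels */
    forward = buf;
    long count = 0;
    while (next_char() != EOF)
        count++;
    printf("%ld characters read through the two-half buffer\n", count);
    return 0;
}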

Specifications of Tokens

Regular expressions are an important notation for specifying patterns. Each pattern matches a set of strings, so regular expressions serve as names for sets of strings.

Some Definitions

Alphabet
o An alphabet A is a set of symbols from which languages are formed. For example, {0-9} is an alphabet used to produce all the non-negative integer numbers, and {0-1} is an alphabet used to produce all binary strings.

String
o A string is a finite sequence of characters from the alphabet A. Given the alphabet A, A^2 = A.A is the set of strings of length 2, and similarly A^n is the set of strings of length n. The length of a string w is denoted |w|, i.e. the number of characters (symbols) in w. We also have A^0 = {ε}, where ε is called the empty string.

Kleene Closure
o The Kleene closure of an alphabet A, denoted A*, is the set of all strings of any length (including 0) that can be formed from A. Mathematically, A* = A^0 ∪ A^1 ∪ A^2 ∪ … For any string w over the alphabet A, w ∈ A*. A language L over the alphabet A is a set such that L ⊆ A*.

The string s is called a prefix of w if s is obtained by removing zero or more trailing characters from w; s is a proper prefix if s ≠ w. The string s is called a suffix of w if s is obtained by deleting zero or more leading characters from w; s is a proper suffix if s ≠ w. The string s is called a substring of w if s can be obtained by deleting zero or more leading or trailing characters from w; s is a proper substring if s ≠ w.

Regular Operators

o The following operators are called regular operators, and the languages formed with them are called regular languages.

.        Concatenation operator: R.S = { rs | r ∈ R and s ∈ S }
*        Kleene * operator: R* is the set of strings formed by concatenating zero or more strings of R
+ / |    Choice/union operator: R + S = { t | t ∈ R or t ∈ S }

Regular Expression (RE)

We use regular expressions to describe the tokens of a programming language.
- ε is a regular expression denoting the language { ε }
- a ∈ A is a regular expression denoting { a }
- If r and s are regular expressions denoting the languages L1(r) and L2(s) respectively, then
  - r + s is a regular expression denoting L1(r) ∪ L2(s)
  - rs is a regular expression denoting L1(r)L2(s)
  - r* is a regular expression denoting (L1(r))*
  - (r) is a regular expression denoting L1(r)

Examples
- (1+0)*0 is an RE giving the binary strings that are divisible by 2.
- 0*10* + 0*110* is an RE giving the binary strings that contain exactly one 1 or exactly two adjacent 1s.
- (1+0)*00 denotes the language of all strings that end with 00 (binary numbers that are multiples of 4).
- (01)* + (10)* denotes the set of all strings of alternating 1s and 0s.

Properties of RE
- r + s = s + r (+ is commutative)
- r + (s + t) = (r + s) + t ; r(st) = (rs)t (+ and . are associative)
- r(s + t) = rs + rt ; (r + s)t = rt + st (. distributes over +)
- εr = rε = r (ε is the identity element for concatenation)
- r* = (r + ε)* (relation between * and ε)
- r** = r* (* is idempotent)

Regular Definitions

Writing a regular expression for some languages can be difficult because the expression can become quite complex. In such cases we may use regular definitions, i.e. we define an RE, give it a name, and reuse that name as a basic symbol in further REs; this gives a regular definition.

Example
d1 → r1, d2 → r2, …, dn → rn
where each di is a distinct name and each ri is an RE over the alphabet A ∪ {d1, d2, …, di-1}.

In C, the RE for identifiers can be written using the regular definition
o letter → a + b + … + z + A + B + … + Z
o digit → 0 + 1 + … + 9
o identifier → (letter + _)(letter + digit + _)*

[Note: Remember that a recursive regular definition does not produce an RE; for example, a definition such as digits → digits digit is wrong!!!]

Recognition of Tokens

A recognizer for a language is a program that takes a string w and answers YES if w is a sentence of that language, otherwise NO. The tokens that are specified using REs are recognized using transition diagrams or finite automata (FA). Starting from the start state we follow the transitions defined; if a transition sequence leads to an accepting state, the token is matched and the lexeme is returned, otherwise the other transition diagrams are tried until all of them have been processed or failure is detected. A recognizer of tokens takes a language L and a string s as input and verifies whether s ∈ L or not.

There are two types of finite automata:
1. Deterministic Finite Automaton (DFA)
2. Non-deterministic Finite Automaton (NFA)

Deterministic Finite Automaton (DFA)

An FA is deterministic if there is exactly one transition for each (state, input) pair. It is the faster recognizer but may take more space. A DFA is a five-tuple (S, A, S0, δ, F) where

S   a finite set of states
A   a finite set of input symbols (the alphabet)
S0  the starting state
δ   the transition function, δ : S × A → S
F   the set of final states, F ⊆ S

Implementing DFA

The following is the algorithm for simulating a DFA to recognize a given string. For a given string w and a DFA D with start state q0, the output is YES if D accepts w, otherwise NO.

DFASim(D, q0)
{
    q = q0;
    c = getchar();
    while (c != eof) {
        q = move(q, c);     // this is the δ function
        c = getchar();
    }
    if (q is in F)
        return YES;         // D accepts w
    else
        return NO;
}
Figure:- Simulating DFA
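The DFASim routine becomes concrete once δ is stored as a transition table. The C sketch below (state names and table layout are illustrative) encodes the DFA for (1+0)*0 from the earlier RE examples, i.e. binary strings ending in 0, and tests a few strings.

/* Table-driven DFA simulation for (1+0)*0 (binary strings divisible by 2). */
#include <stdio.h>

enum { Q0, Q1, NSTATES };

static const int delta[NSTATES][2] = {   /* delta[state][symbol], columns: '0', '1' */
    { Q1, Q0 },                          /* from Q0 */
    { Q1, Q0 },                          /* from Q1 */
};
static const int accepting[NSTATES] = { 0, 1 };   /* F = { Q1 } */

static int dfa_sim(const char *w) {               /* YES iff the DFA accepts w */
    int q = Q0;                                    /* q = q0          */
    for (; *w; w++) {
        if (*w != '0' && *w != '1') return 0;      /* not in the alphabet */
        q = delta[q][*w - '0'];                    /* q = move(q, c)  */
    }
    return accepting[q];                           /* is q in F?      */
}

int main(void) {
    const char *tests[] = { "0", "10", "1101", "100", "" };
    for (int i = 0; i < 5; i++)
        printf("%-5s : %s\n", tests[i], dfa_sim(tests[i]) ? "YES" : "NO");
    return 0;
}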

Non-deterministic Finite Automaton (NFA)

An FA is non-deterministic if there can be more than one transition for a (state, input) pair. It is the slower recognizer but may take less space. An NFA is a five-tuple (S, A, S0, δ, F) where
S   a finite set of states
A   a finite set of input symbols (the alphabet)
S0  the starting state
δ   the transition function, δ : S × A → 2^S
F   the set of final states, F ⊆ S

Example

[Figure: an NFA over {0, 1} with states S0, S1, S2, S3 and accepting state S3; S0 reaches S1 on 1, S1 loops on 1 and 0, and on 0 may also move to S3]

The above state machine is an NFA with a non-deterministic transition at state S1: on reading 0 it may either stay in S1 or go to S3 (accept). Its regular expression is 1(1 + 0)*0.

[Figure: an ε-NFA equivalent to the regular expression (10)* + (01)*; from the start state s0, ε-moves lead to an upper half recognizing (10)* and a lower half recognizing (01)*]

The above figure shows a state machine with ε-moves that is equivalent to the regular expression (10)* + (01)*. The upper half of the machine recognizes (10)* and the lower half recognizes (01)*. The machine non-deterministically moves to the upper half or the lower half when started from state s0.

Algorithm

S = ε-closure({s0})                  // set of all states reachable from s0 by ε-transitions
c = getchar()
while (c != eof) {
    S = ε-closure(move(S, c))        // set of all states reachable from a state in S by a transition on c
    c = getchar()
}
if (S ∩ F ≠ φ) then
    return YES
else
    return NO
Figure:- Simulating NFA
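The same algorithm can be coded directly by representing state sets as bit masks. The C sketch below hard-codes a small ε-NFA for (a + b)*a, numbered 0-8; its ε-closures agree with those computed in the subset-construction example that follows. The encoding and helper names are assumptions of the sketch.

/* NFA simulation: state sets as bit masks over 9 NFA states. */
#include <stdio.h>

#define NSTATES 9
enum { START = 0, ACCEPT = 8 };

/* eps[s] and on[s][sym] are bit masks of target states (sym: 0 = 'a', 1 = 'b') */
static const unsigned eps[NSTATES] = {
    [0] = 1u<<1 | 1u<<7,  [1] = 1u<<2 | 1u<<4,  [3] = 1u<<6,
    [5] = 1u<<6,          [6] = 1u<<1 | 1u<<7,
};
static const unsigned on[NSTATES][2] = {
    [2] = { 1u<<3, 0 },   [4] = { 0, 1u<<5 },   [7] = { 1u<<8, 0 },
};

static unsigned eps_closure(unsigned S) {      /* states reachable by ε-moves alone */
    unsigned prev;
    do {
        prev = S;
        for (int s = 0; s < NSTATES; s++)
            if (S & (1u << s)) S |= eps[s];
    } while (S != prev);
    return S;
}

static unsigned move(unsigned S, int sym) {    /* one-symbol transition of a state set */
    unsigned T = 0;
    for (int s = 0; s < NSTATES; s++)
        if (S & (1u << s)) T |= on[s][sym];
    return T;
}

static int nfa_sim(const char *w) {            /* YES iff the NFA accepts w */
    unsigned S = eps_closure(1u << START);
    for (; *w; w++) {
        if (*w != 'a' && *w != 'b') return 0;
        S = eps_closure(move(S, *w - 'a'));
    }
    return (S & (1u << ACCEPT)) != 0;          /* S ∩ F nonempty? */
}

int main(void) {
    const char *tests[] = { "a", "aba", "abb", "ba", "" };
    for (int i = 0; i < 5; i++)
        printf("%-4s : %s\n", tests[i], nfa_sim(tests[i]) ? "YES" : "NO");
    return 0;
}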

Reducibility

1. NFA to DFA
2. RE to NFA
3. RE to DFA

NFA to DFA (Subset Construction)

The subset construction is an algorithm that constructs, from an NFA, a DFA recognizing the same language. A given subset of NFA states may contain several accepting states; in that case the accepting state corresponding to the pattern listed first in the lexical analyzer generator specification has priority. Here too, state transitions are made until a state is reached which has no next state for the current input symbol, and the last input position at which the DFA entered an accepting state gives the lexeme. We need the following operations:

ε-closure(s)   the set of NFA states reachable from NFA state s on ε-transitions alone
ε-closure(T)   the set of NFA states reachable from some NFA state s in T on ε-transitions alone
move(T, a)     the set of NFA states to which there is a transition on input symbol a from some NFA state s in T

Subset Construction Algorithm

Put ε-closure(s0) as an unmarked state in DStates
While there is an unmarked state T in DStates do
    Mark T
    For each input symbol a ∈ A do
        U = ε-closure(move(T, a))
        If U is not in DStates then
            Add U as an unmarked state to DStates
        End if
        DTran[T, a] = U
    End do
End do

DStates is the set of states of the new DFA, each consisting of a set of states of the NFA, and DTran is the transition table of the new DFA. A state in DStates is an accepting state of the DFA if it contains at least one accepting state of the NFA. The start state of the DFA is ε-closure(s0).
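A C sketch of this subset construction is shown below, applied to the same nine-state ε-NFA as in the simulation sketch above; DStates and DTran follow the names used in the algorithm, while the fixed MAXD bound and the bit-mask state sets are simplifications for the sketch. Running it reproduces the S0, S1, S2 states worked out in the example that follows.

/* Subset construction on a small hard-coded ε-NFA (states 0..8). */
#include <stdio.h>

#define NSTATES 9
#define MAXD    32              /* ample bound on DFA states for this example (no overflow check) */
enum { START = 0, ACCEPT = 8 };

static const unsigned eps[NSTATES] = {
    [0] = 1u<<1 | 1u<<7,  [1] = 1u<<2 | 1u<<4,  [3] = 1u<<6,
    [5] = 1u<<6,          [6] = 1u<<1 | 1u<<7,
};
static const unsigned on[NSTATES][2] = {      /* symbol 0 = 'a', 1 = 'b' */
    [2] = { 1u<<3, 0 },   [4] = { 0, 1u<<5 },  [7] = { 1u<<8, 0 },
};

static unsigned eps_closure(unsigned S) {
    unsigned prev;
    do {
        prev = S;
        for (int s = 0; s < NSTATES; s++)
            if (S & (1u << s)) S |= eps[s];
    } while (S != prev);
    return S;
}

static unsigned move_set(unsigned S, int sym) {
    unsigned T = 0;
    for (int s = 0; s < NSTATES; s++)
        if (S & (1u << s)) T |= on[s][sym];
    return T;
}

static unsigned DStates[MAXD];          /* each DFA state is a set of NFA states */
static int      DTran[MAXD][2];
static int      ndstates;

static int find_or_add(unsigned U) {    /* return the index of U, adding it if new */
    for (int i = 0; i < ndstates; i++)
        if (DStates[i] == U) return i;
    DStates[ndstates] = U;
    return ndstates++;
}

int main(void) {
    find_or_add(eps_closure(1u << START));       /* unmarked start state S0 */
    for (int T = 0; T < ndstates; T++)           /* advancing T marks the states in order */
        for (int a = 0; a < 2; a++) {
            unsigned U = eps_closure(move_set(DStates[T], a));
            DTran[T][a] = U ? find_or_add(U) : -1;   /* -1 means no transition */
        }

    for (int i = 0; i < ndstates; i++) {
        printf("S%d = {", i);
        for (int s = 0; s < NSTATES; s++)
            if (DStates[i] & (1u << s)) printf(" %d", s);
        printf(" }  a->S%d  b->S%d%s\n", DTran[i][0], DTran[i][1],
               (DStates[i] & (1u << ACCEPT)) ? "  (accepting)" : "");
    }
    return 0;
}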

Example

[Figure: an ε-NFA with states 0-8 and accepting state 8; 0 →ε 1, 7; 1 →ε 2, 4; 2 →a 3; 4 →b 5; 3 →ε 6; 5 →ε 6; 6 →ε 1, 7; 7 →a 8]

S0 = ε-closure({0}) = {0, 1, 2, 4, 7}
Put S0 into DStates as an unmarked state

Mark S0
ε-closure(move(S0, a)) = ε-closure({3, 8}) = {1, 2, 3, 4, 6, 7, 8} = S1; put S1 into DStates as an unmarked state
ε-closure(move(S0, b)) = ε-closure({5}) = {1, 2, 4, 5, 6, 7} = S2; put S2 into DStates as an unmarked state
DTran[S0, a] = S1
DTran[S0, b] = S2

Mark S1
ε-closure(move(S1, a)) = ε-closure({3, 8}) = {1, 2, 3, 4, 6, 7, 8} = S1
ε-closure(move(S1, b)) = ε-closure({5}) = {1, 2, 4, 5, 6, 7} = S2
DTran[S1, a] = S1
DTran[S1, b] = S2

Mark S2
ε-closure(move(S2, a)) = ε-closure({3, 8}) = {1, 2, 3, 4, 6, 7, 8} = S1
ε-closure(move(S2, b)) = ε-closure({5}) = {1, 2, 4, 5, 6, 7} = S2
DTran[S2, a] = S1
DTran[S2, b] = S2

S0 is the start state of the DFA, since 0 (the NFA start state) is a member of S0 = {0, 1, 2, 4, 7}.
S1 is an accepting state of the DFA, since 8 is a member of S1 = {1, 2, 3, 4, 6, 7, 8}.

Now the equivalent DFA is:

[Figure: the resulting DFA with states S0 (start), S1 (accepting) and S2; every state goes to S1 on a and to S2 on b]

RE to NFA (Thompson's Construction)

Input: an RE r over alphabet A
Output: an ε-NFA accepting L(r)
Procedure: proceed in a bottom-up manner by creating an ε-NFA for each symbol in A, including ε, and then recursively combining them for the other operations as shown below.

1. For a in A, and for ε, both are REs and are constructed as
[Figure: a start state i with a single ε-transition (respectively a single a-transition) to a final state f]

2. For REs r and s, r.s is an RE too and is constructed as
[Figure: the ε-NFAs for r and s connected in series]
i.e. the start state of r becomes the start state of r.s, and the final state of s becomes the final state of r.s.

3. For REs r and s, r + s is an RE too and is constructed as
[Figure: a new start state with ε-transitions into the ε-NFAs for r and s, whose final states lead by ε-transitions to a new common final state]

4. For an RE r, r* is an RE too and is constructed as
[Figure: a new start state and a new final state, with ε-transitions that allow skipping the ε-NFA for r, entering it, repeating it, and leaving it]

For an RE (r), ε-NFA((r)) is ε-NFA(r).

ε-NFA(r) has at most twice as many states as the number of symbols and operators in r, since each construction step adds at most two new states. ε-NFA(r) has exactly one start state and one accepting state. Each state of ε-NFA(r) has either one outgoing transition on a symbol in A or at most two outgoing ε-transitions.

Example: (a + b)*a to NFA

For a: [Figure: two states joined by an a-transition]
For b: [Figure: two states joined by a b-transition]
For a + b: [Figure: a new start state with ε-transitions to the a- and b-fragments, whose final states go by ε-transitions to a new common final state]

For (a + b)*: [Figure: the a + b fragment wrapped with a new start and final state, with ε-transitions that allow skipping it or repeating it]
For (a + b)*a: [Figure: the (a + b)* fragment concatenated with an a-fragment, giving the complete ε-NFA for (a + b)*a]
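Thompson's construction can be written as a handful of small functions that build NFA fragments. The C sketch below (state/fragment types and names are illustrative) builds an ε-NFA for the example (a + b)*a; unlike the textbook pictures it glues fragments together with an extra ε-edge on concatenation instead of merging states, so the exact state count differs slightly.

/* Thompson-style construction: each operation builds a fragment with one
 * start and one accepting state; a transition exists iff its out pointer
 * is non-NULL, and label EPS (0) marks an ε-edge.                        */
#include <stdio.h>
#include <stdlib.h>

#define EPS 0

typedef struct State {
    int id;
    int label1; struct State *out1;   /* first outgoing transition  */
    int label2; struct State *out2;   /* second outgoing transition */
} State;

typedef struct { State *start, *accept; } Frag;

static int next_id = 0;
static State *new_state(void) {
    State *s = calloc(1, sizeof *s);
    s->id = next_id++;
    return s;
}

static Frag sym(int c) {                       /* fragment for a single symbol (or ε) */
    Frag f = { new_state(), new_state() };
    f.start->label1 = c;  f.start->out1 = f.accept;
    return f;
}

static Frag cat(Frag r, Frag s) {              /* r.s : ε-edge from r's accept to s's start */
    r.accept->label1 = EPS;  r.accept->out1 = s.start;
    return (Frag){ r.start, s.accept };
}

static Frag alt(Frag r, Frag s) {              /* r + s : new start/accept around both */
    Frag f = { new_state(), new_state() };
    f.start->label1 = EPS;  f.start->out1 = r.start;
    f.start->label2 = EPS;  f.start->out2 = s.start;
    r.accept->label1 = EPS; r.accept->out1 = f.accept;
    s.accept->label1 = EPS; s.accept->out1 = f.accept;
    return f;
}

static Frag star(Frag r) {                     /* r* : allow skipping and repeating r */
    Frag f = { new_state(), new_state() };
    f.start->label1 = EPS;  f.start->out1 = r.start;
    f.start->label2 = EPS;  f.start->out2 = f.accept;
    r.accept->label1 = EPS; r.accept->out1 = r.start;
    r.accept->label2 = EPS; r.accept->out2 = f.accept;
    return f;
}

int main(void) {
    /* build the ε-NFA for the running example (a + b)*a */
    Frag nfa = cat(star(alt(sym('a'), sym('b'))), sym('a'));
    printf("start state %d, accepting state %d, %d states in total\n",
           nfa.start->id, nfa.accept->id, next_id);
    return 0;
}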

RE to DFA

Important States
A state s of an ε-NFA is an important state if it has at least one outgoing non-ε transition. In the optimal state machine all states are important states.

Augmented Regular Expression
The ε-NFA created from an RE has exactly one accepting state, and that accepting state is not an important state since it has no outgoing transition. By adding the special symbol # to the RE at the rightmost position, we can make the accepting state an important state that has a transition on #; the accepting state is then present in the optimal state machine. The RE (r)# is called the augmented RE.

Procedure
1. Convert the given expression to the augmented expression by concatenating it with #, i.e. (r) → (r)#.
2. Construct the syntax tree of this augmented regular expression. In this tree all the operators are inner nodes and all the alphabet symbols, including #, are leaves.
3. Number each leaf (these numbers are the positions).
4. Traverse the tree to compute nullable, firstpos, lastpos and followpos.
5. Construct the DFA from the followpos so obtained.

Some Definitions
o nullable(n): true if the subtree rooted at n can generate the empty string ε, false otherwise.
o firstpos(n): the set of all positions that can be the first symbol of a string generated by the subtree rooted at n.
o lastpos(n): the set of all positions that can be the last symbol of a string generated by the subtree rooted at n.
o followpos(i): the set of all positions that can follow position i in a valid string of the regular expression.
To calculate followpos we need the above three functions.

Rules to evaluate nullable, firstpos and lastpos

Node n: leaf labeled ε
    nullable(n) = true
    firstpos(n) = {}
    lastpos(n)  = {}

Node n: non-ε leaf at position i
    nullable(n) = false
    firstpos(n) = {i}
    lastpos(n)  = {i}

Node n: or-node a | b
    nullable(n) = nullable(a) or nullable(b)
    firstpos(n) = firstpos(a) ∪ firstpos(b)
    lastpos(n)  = lastpos(a) ∪ lastpos(b)

Node n: cat-node a.b
    nullable(n) = nullable(a) and nullable(b)
    firstpos(n) = if nullable(a) then firstpos(a) ∪ firstpos(b) else firstpos(a)
    lastpos(n)  = if nullable(b) then lastpos(a) ∪ lastpos(b) else lastpos(b)

Node n: star-node a*
    nullable(n) = true
    firstpos(n) = firstpos(a)
    lastpos(n)  = lastpos(a)

Computation of followpos

Algorithm
for each node n in the tree do
    if n is a cat-node with left child c1 and right child c2 then
        for each i in lastpos(c1) do
            followpos(i) = followpos(i) ∪ firstpos(c2)
        end do
    else if n is a star-node then
        for each i in lastpos(n) do
            followpos(i) = followpos(i) ∪ firstpos(n)
        end do
    end if
end do

Algorithm to create the DFA from the RE
1. Create the syntax tree of (r)#.
2. Evaluate the functions nullable, firstpos, lastpos and followpos.
3. The start state of the DFA is S0 = firstpos(root), where root is the root of the tree; add it as an unmarked state.
4. While there is an unmarked state T among the states of the DFA
    Mark T
    for each input symbol a in A do
        Let s1, s2, …, sn be the positions in T at which the symbol is a
        U = followpos(s1) ∪ … ∪ followpos(sn)
        Move(T, a) = U
        if (U is not empty and not in the states of the DFA)
            Put U into the states of the DFA and leave it unmarked
        end if
    end do
end do
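The nullable/firstpos/lastpos rules and the followpos algorithm can be traced mechanically on a syntax tree. The C sketch below hard-codes the tree of the worked example a(a + b)*bba# (leaves numbered 1-7 from the left) and prints followpos; the bit-mask node encoding is an assumption of the sketch, and its output matches the followpos sets computed by hand in the example that follows.

/* nullable / firstpos / lastpos / followpos on the syntax tree of a(a+b)*bba#. */
#include <stdio.h>

#define NPOS 7
enum Kind { LEAF, CAT, OR, STAR };

struct Node {
    enum Kind kind;
    int pos;                    /* leaf position (1..7), 0 otherwise */
    struct Node *l, *r;
    int nullable;
    unsigned first, last;       /* firstpos/lastpos as bit masks over positions */
};

static unsigned followpos[NPOS + 1];

static void eval(struct Node *n) {
    if (n->kind == LEAF) {
        n->nullable = 0;                        /* this RE has no ε leaves */
        n->first = n->last = 1u << n->pos;
        return;
    }
    eval(n->l);
    if (n->r) eval(n->r);
    switch (n->kind) {
    case OR:
        n->nullable = n->l->nullable || n->r->nullable;
        n->first = n->l->first | n->r->first;
        n->last  = n->l->last  | n->r->last;
        break;
    case CAT:
        n->nullable = n->l->nullable && n->r->nullable;
        n->first = n->l->nullable ? (n->l->first | n->r->first) : n->l->first;
        n->last  = n->r->nullable ? (n->l->last  | n->r->last)  : n->r->last;
        for (int i = 1; i <= NPOS; i++)               /* followpos rule for cat-nodes  */
            if (n->l->last & (1u << i)) followpos[i] |= n->r->first;
        break;
    case STAR:
        n->nullable = 1;
        n->first = n->l->first;
        n->last  = n->l->last;
        for (int i = 1; i <= NPOS; i++)               /* followpos rule for star-nodes */
            if (n->last & (1u << i)) followpos[i] |= n->first;
        break;
    default: break;
    }
}

int main(void) {
    /* leaves: a1 a2 b3 b4 b5 a6 #7 ; tree = ((((a1 . (a2|b3)*) . b4) . b5) . a6) . #7 */
    struct Node a1={LEAF,1}, a2={LEAF,2}, b3={LEAF,3}, b4={LEAF,4},
                b5={LEAF,5}, a6={LEAF,6}, h7={LEAF,7};
    struct Node or23={OR,0,&a2,&b3}, st={STAR,0,&or23,0};
    struct Node c1={CAT,0,&a1,&st}, c2={CAT,0,&c1,&b4}, c3={CAT,0,&c2,&b5},
                c4={CAT,0,&c3,&a6}, root={CAT,0,&c4,&h7};
    eval(&root);
    for (int i = 1; i <= NPOS; i++) {
        printf("followpos(%d) = {", i);
        for (int j = 1; j <= NPOS; j++)
            if (followpos[i] & (1u << j)) printf(" %d", j);
        printf(" }\n");
    }
    return 0;
}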

Note:
The start state of the resulting DFA is firstpos(root).
The final states of the DFA are all states containing the position of #.

Example: Construct a DFA from the RE a(a + b)*bba#

Create the syntax tree
[Figure: the syntax tree of a(a + b)*bba#, with the leaves a, a, b, b, b, a, # numbered 1 to 7]

Calculate nullable, firstpos and lastpos
[Figure: the same syntax tree annotated with nullable, firstpos and lastpos at each node]

Calculate followpos

Using the rules we get
followpos(1) = {2, 3, 4}
followpos(2) = {2, 3, 4}
followpos(3) = {2, 3, 4}
followpos(4) = {5}
followpos(5) = {6}
followpos(6) = {7}
followpos(7) = -

Now,
Starting state = S1 = firstpos(root) = {1}

Mark S1
For a: followpos(1) = {2, 3, 4} = S2
For b: φ

Mark S2
For a: followpos(2) = {2, 3, 4} = S2  (because among {2, 3, 4}, the position holding a is 2)
For b: followpos(3) ∪ followpos(4) = {2, 3, 4, 5} = S3  (because among {2, 3, 4}, the positions holding b are 3 and 4)

Mark S3
For a: followpos(2) = {2, 3, 4} = S2
For b: followpos(3) ∪ followpos(4) ∪ followpos(5) = {2, 3, 4, 5, 6} = S4

Mark S4
For a: followpos(2) ∪ followpos(6) = {2, 3, 4, 7} = S5  (since it contains 7, the position of #, it is an accepting state)
For b: followpos(3) ∪ followpos(4) ∪ followpos(5) = {2, 3, 4, 5, 6} = S4

Mark S5
For a: followpos(2) = {2, 3, 4} = S2
For b: followpos(3) ∪ followpos(4) = {2, 3, 4, 5} = S3

Now there are no new states, so stop. The DFA is:

[Figure: the resulting DFA with states S1 (start), S2, S3, S4 and S5 (accepting); S1 -a-> S2; S2 -a-> S2, S2 -b-> S3; S3 -a-> S2, S3 -b-> S4; S4 -a-> S5, S4 -b-> S4; S5 -a-> S2, S5 -b-> S3]

State Minimization in DFA

DFA minimization is the task of transforming a given deterministic finite automaton (DFA) into an equivalent DFA that has the minimum number of states. Two states p and q are called equivalent if, for every input string w, δ(p, w) is an accepting state iff δ(q, w) is an accepting state; otherwise they are distinguishable. We say that a string w distinguishes state s from state t if, starting DFA M in state s and feeding it the input w, we end up in an accepting state, but starting in state t with the same input w we end up in a non-accepting state, or vice versa. Minimization finds the states that can be distinguished by some input string; each group of states that cannot be distinguished is then merged into a single state.

Procedure
1. First partition the set of states into two groups: a) the set of accepting states and b) the set of non-accepting states.
2. Split the partitions on the basis of distinguishable states, keeping equivalent states together in a group.
3. To split, examine the transitions from the states of a group on every input symbol. If, on some input, the transitions from the states of a group lead to different groups, then those states are distinguishable; remove them from the current group and create new groups.
4. Repeat until every group contains only equivalent states or a single state.

Example: minimize the DFA below.

[Figure: a DFA with states a, b, c, d, e and f, where f is the only accepting state]

Partition 1: {{a, b, c, d, e}, {f}}
    with input 0: a → b, b → d, c → b, d → e; all transitions stay within the same group.
    with input 1: e → f (a different group), so e is distinguishable from the others.
Partition 2: {{a, b, c, d}, {e}, {f}}
    with input 0: d → e (a different group), so d is distinguishable.
Partition 3: {{a, b, c}, {d}, {e}, {f}}
    with input 0: b → d (watch!!! now a different group), so b is distinguishable.
Partition 4: {{a, c}, {b}, {d}, {e}, {f}}
    with both 0 and 1, a and c move to the same groups (a, c → b), so no further split is possible; a and c are equivalent.
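One mechanical way to find the distinguishable pairs is the pair-marking (table-filling) method, which computes the same equivalence as the partition splitting above. The C sketch below applies it to the three-state DFA obtained earlier by the subset construction (S1 accepting); the table encoding is an assumption of the sketch, and it reports that S0 and S2 are equivalent and can be merged.

/* Pair-marking sketch of DFA minimization on the three-state DFA S0, S1, S2. */
#include <stdio.h>

#define NS 3
static const int delta[NS][2] = {             /* [state][symbol], columns: a, b */
    { 1, 2 },   /* S0 */
    { 1, 2 },   /* S1 */
    { 1, 2 },   /* S2 */
};
static const int accepting[NS] = { 0, 1, 0 }; /* F = { S1 } */

int main(void) {
    int dist[NS][NS] = { 0 };
    int changed;

    /* base case: an accepting and a non-accepting state are distinguishable */
    for (int p = 0; p < NS; p++)
        for (int q = 0; q < NS; q++)
            if (accepting[p] != accepting[q]) dist[p][q] = 1;

    /* mark (p, q) if some symbol leads to an already-distinguished pair */
    do {
        changed = 0;
        for (int p = 0; p < NS; p++)
            for (int q = 0; q < NS; q++)
                for (int a = 0; a < 2 && !dist[p][q]; a++)
                    if (dist[delta[p][a]][delta[q][a]])
                        dist[p][q] = changed = 1;
    } while (changed);

    for (int p = 0; p < NS; p++)
        for (int q = p + 1; q < NS; q++)
            if (!dist[p][q])
                printf("S%d and S%d are equivalent and can be merged\n", p, q);
    return 0;
}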

Space-Time Tradeoffs: NFA vs DFA

Given an RE r and an input string s, to determine whether s is in L(r) we can either construct an NFA and simulate it directly, or additionally construct a DFA from that NFA and then test s with the DFA.

ε-NFA (a plain NFA differs only by a constant factor)
Space complexity: O(|r|) (at most twice the number of symbols and operators in r).
Time complexity: O(|r| * |s|) (for each of the |s| input characters we may have to process O(|r|) states and transitions).

DFA
Space complexity: O(2^|r|) (ε-NFA construction followed by the subset construction).
Time complexity: O(|s|) (a single transition per input character, so the test for s is linear).

If we can construct the DFA directly from the RE, avoiding the intermediate NFA and its transition table, we can improve the performance.

Exercises

1. Given the alphabet A = {0, 1}, write regular expressions for the following:
   a. Strings that contain either the substring 001 or the substring 100
   b. Strings in which no two 1s occur consecutively
   c. Strings which have an odd number of 0s
   d. Strings which have an odd number of 0s and an even number of 1s
   e. Strings that have at most two 0s
   f. Strings that have at least three 1s
   g. Strings that have at most two 0s and at least three 1s
2. Write regular definitions specifying integer numbers, floating-point numbers and integer array declarations in C.
3. Convert the following regular expressions first into an NFA and then into a DFA.
   a. 0 + (1 + 0)*00
   b. zero → 0
      one → 1
      bit → zero + one
      bits → bit*
4. Write an algorithm for computing ε-closure(s) of any state s in an NFA.
5. Convert the following REs to DFA
   a. (a + b)*a
   b. (a + ε)bc*
6. Describe the languages denoted by the following REs
   a. 0(0 + 1)*0
   b. ((ε + 0)1*)*
   c. (0 + 1)*0(0 + 1)(0 + 1)
   d. 0*10*10*10*
   e. (00 + 11)*((01 + 10)(00 + 11)*(01 + 10)(00 + 11)*)*
7. Construct an NFA from each of the following REs and trace the algorithm for the input ababbab
   a. (a + b)*
   b. (a* + b*)*
   c. ((ε + a)b*)*

   d. (a + b)*abb(a + b)*
8. Show that the following REs are equivalent by constructing their minimum-state DFAs
   a. (a + b)*
   b. (a* + b*)*
   c. ((ε + a)b*)*

Bibliography

A. V. Aho, R. Sethi, J. D. Ullman, Compilers: Principles, Techniques, and Tools
Notes from CDCSIT, TU, by Samujjwal Bhandari and Bishnu Gautam
Manuals of Srinath Srinivasa, Indian Institute of Information Technology, Bangalore
