MODEL ANSWERS COMP36512, May 2016

QUESTION 1

a) Clearly: 1-g, 2-d, 3-h, 4-e, 5-i, 6-a, 7-b, 8-c, 9-f. 0.5 marks for each correct answer, rounded up as no halves are used.

b) i) It was mentioned in the lectures that FORTRAN stores arrays in column-major order. Therefore, the first 2 columns of I4 map to I1, the second 2 columns map to I2, and the third 2 columns map to I3. Thus, the problem boils down to finding how to map a (50,2) array onto a (10,10) array. The answer is that element (i,j) maps to (1+((i-1) mod 10), (j-1)*5+int((i-1)/10)+1). This requires some thought and may be challenging, hence 6 marks.

b) ii) It was mentioned in the lectures that compilers typically collapse any high-dimensional array into consecutive memory locations. Using this capability, the compiler can convert all the high-dimensional arrays of the two COMMON statements into single-dimensional, virtually consecutive memory locations, which can then be linked together. Hence, this mapping is done through mapping both to consecutive memory locations.

b) iii) Clearly, COMMON statements are also a way to reuse memory space and minimise memory requirements. This was particularly important at a time when memory was limited. They can also be seen as a simple substitute for dynamic memory allocation.
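The index mapping in b) i) can be checked mechanically. Below is a minimal sketch, assuming 1-based Fortran indices and modelling column-major storage with explicit offset arithmetic; the helper names are invented for illustration and are not part of the model answer:

```python
def offset_50x2(i, j):
    # column-major: consecutive elements of a column are adjacent in memory
    return (j - 1) * 50 + (i - 1)

def offset_10x10(p, q):
    return (q - 1) * 10 + (p - 1)

def remap(i, j):
    # the mapping from the model answer: (i,j) -> (p,q)
    p = 1 + (i - 1) % 10
    q = (j - 1) * 5 + (i - 1) // 10 + 1
    return p, q

# every element of the (50,2) array lands on the same memory word
# as its image in the (10,10) array
assert all(offset_50x2(i, j) == offset_10x10(*remap(i, j))
           for i in range(1, 51) for j in range(1, 3))
```

Running the assertion over all 100 index pairs confirms that the two views address identical linear offsets.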

QUESTION 2

a) Clearly, any integer greater than 599 has either at least 4 digits, or 3 digits with a first digit of 6, 7, 8 or 9. Hence, the regular expression is:

Integer599 → digit0 digit digit digit digit* | 6 digit digit | 7 digit digit | 8 digit digit | 9 digit digit

where digit is any digit (0 to 9) and digit0 is any digit except 0.

b) The regular expressions are: b(ab)* and (ba | ab)b*. The DFAs are: [not drawn]

c) i) Yes, e.g., digit* (1 | 3 | 5 | 7 | 9)
ii) No, regular expressions cannot cope with memory.
iii) No, same as above: we cannot capture the notion of the same number of digits.
iv) Yes, 11*00*.

d) The regular expression is:

Float → digit+ . digit+ ((E (+ | - | ε) digit+) | ε)
digit → 0 | 1 | 2 | ... | 9

e) The DFA is: [not drawn]
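The answer to a) can be sanity-checked by transcribing it into a concrete regex dialect. This is a sketch, assuming Python's re syntax and test values of my own choosing; neither is part of the model answer:

```python
import re

# Integer599 -> digit0 digit digit digit digit* | [6-9] digit digit,
# with digit = [0-9] and digit0 = [1-9]
integer599 = re.compile(r'[1-9][0-9][0-9][0-9][0-9]*|[6-9][0-9][0-9]')

# every integer > 599 matches...
assert all(integer599.fullmatch(str(n)) for n in (600, 999, 1000, 54321))
# ...and nothing <= 599 does
assert not any(integer599.fullmatch(str(n)) for n in (0, 7, 42, 599))
```

Exhaustively testing 0 to, say, 100000 this way gives good confidence that the expression accepts exactly the integers greater than 599.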

QUESTION 3

a) i) This is both a left-recursive (because of rule 9) and a right-recursive (because of rule 2) grammar. Top-down parsing methods cannot handle left-recursive grammars because they attempt to build a leftmost derivation, and left recursion may lead to infinite substitutions. (3 marks)

a) ii)
G -> L
  -> S L
  -> if E then S else S endif L
  -> if id then S else S endif L
  -> if id then if E then S endif else S endif L
  -> if id then if id then S endif else S endif L
  -> if id then if id then id=E endif else S endif L
  -> if id then if id then id=constant endif else S endif L
  -> if id then if id then id=constant endif else id=E endif L
  -> if id then if id then id=constant endif else id=constant endif L
  -> if id then if id then id=constant endif else id=constant endif S L
  -> if id then if id then id=constant endif else id=constant endif id=E L
  -> if id then if id then id=constant endif else id=constant endif id=id L
  -> if id then if id then id=constant endif else id=constant endif id=id endprogram

[parse tree drawn here]

a) iii) To construct a top-down predictive parser with one symbol of lookahead, we should:
1) eliminate left recursion (arising from the rule E → E + id). This rule (and all other rules with E on the left-hand side) can be replaced by: E → id Id_Rest, E → constant Id_Rest and Id_Rest → + id Id_Rest | ε (essentially, the left recursion is transformed into right recursion, but we don't care about the latter!);
2) make sure that the grammar has the LL(1) property (currently, S has two productions with a common prefix that need to be left-factored). Rewriting these two rules (S → if E then S endif and S → if E then S else S endif) we get: S → if E then S Remaining_If, with Remaining_If → else S endif | endif.

b)
Stack               Input  Action
$ s0                zxy    shift 5
$ s0 z s5           xy     reduce (5)
$ s0 B s4           xy     shift 7
$ s0 B s4 x s7      y      reduce (4)
$ s0 A s3           y      shift 6
$ s0 A s3 y s6      eof    reduce (3)
$ s0 S s1           eof    accept
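The effect of the two transformations in a) iii) can be illustrated with a recursive-descent sketch. This is an assumption-laden illustration: the token spellings, the id = E assignment form and the Parser helper are all invented here, not taken from the exam grammar. The point is that after left factoring, one token of lookahead suffices to choose a production:

```python
# Sketch of a predictive parser for the left-factored rules (assumed forms):
#   S            -> if E then S Remaining_If | id = E
#   Remaining_If -> else S endif | endif
#   E            -> (id | constant) Id_Rest ; Id_Rest -> + id Id_Rest | empty
class Parser:
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else 'eof'

    def eat(self, t):
        assert self.peek() == t, f'expected {t}, got {self.peek()}'
        self.pos += 1

    def S(self):
        if self.peek() == 'if':
            self.eat('if'); self.E(); self.eat('then'); self.S()
            self.remaining_if()
        else:
            self.eat('id'); self.eat('='); self.E()

    def remaining_if(self):
        # one token of lookahead decides the alternative: the LL(1) property
        if self.peek() == 'else':
            self.eat('else'); self.S(); self.eat('endif')
        else:
            self.eat('endif')

    def E(self):
        # right-recursive Id_Rest becomes a simple loop
        assert self.peek() in ('id', 'constant')
        self.pos += 1
        while self.peek() == '+':
            self.eat('+'); self.eat('id')

p = Parser(['if', 'id', 'then', 'id', '=', 'constant', 'endif'])
p.S()
assert p.peek() == 'eof'   # the whole input was consumed
```

Note how remaining_if never backtracks: the common prefix "if E then S" has been parsed once before the else/endif decision is made.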

QUESTION 4

a) For the call graph, see [not drawn]. When execution reaches the printf statement for the first time, the following functions have been called: A(4) (from main), A(2) (from A), A(0) (from A), C(0) (from A), B(0) (from C). So, including main(), there will be six activation records on the stack, each activation record corresponding to a function call.

b) After constant propagation, n is replaced by 64 throughout. After constant folding, tmp=64*4 is replaced by tmp=256. After copy propagation, tmp is replaced by 256 throughout. Dead-code elimination would then remove tmp. Finally, after reduction in strength:

n=64
i=1
L1: if (i>=64) goto L2
    A(i)=256
    B(i*4)=1
    i=i+1
    if (i<64) goto L1
L2:

Of course, students have to show the code after each transformation is applied, as requested.

c) One simple solution is the following:

for (i=0; i<n-s+1; i+=s) {
    loop_body (repeated s times, from i to i+s-1)
}
for (j=i; j<n; j++) {
    loop_body
}

Loop unrolling was mentioned in the lectures in terms of how it is applied, but not as above (i.e., how it is derived).

d) Java programs consist of many small procedures (i.e., methods). To minimise the overhead of calling these, method inlining, which is the replacement of a method call by the body of the method, is particularly useful. Also, to minimise the overhead of garbage collection, a compiler optimisation can be used to allocate objects that are not accessible beyond a procedure on the stack instead of the heap.
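The unrolling scheme in c) can be checked by executing it. A minimal sketch, assuming invented names (run_unrolled, body): the first loop runs the body in groups of s iterations, and the clean-up loop handles the remaining n mod s iterations, mirroring the two C-style loops above:

```python
def run_unrolled(n, s, body):
    # main loop: s copies of the body per iteration, matching
    # for (i=0; i<n-s+1; i+=s)
    i = 0
    while i < n - s + 1:
        for k in range(s):
            body(i + k)
        i += s
    # clean-up loop for the leftover iterations, matching
    # for (j=i; j<n; j++)
    for j in range(i, n):
        body(j)

visited = []
run_unrolled(10, 4, visited.append)
# every iteration 0..9 runs exactly once, in order
assert visited == list(range(10))
```

Trying several (n, s) pairs, including ones where s divides n exactly, confirms that no iteration is skipped or duplicated, which is precisely why the clean-up loop's bound uses the final value of i.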

QUESTION 5

a) i) All we need to do is find the live ranges:
r1 [1,4]; r2 [2,4]; r3 [3,6]; r4 [4,7]; r5 [5,8]; r6 [6,7]; r7 [7,8]; r8 [8,8]

     1  2  3  4  5  6  7  8
R1   X  X  X  X
R2      X  X  X
R3         X  X  X  X
R4            X  X  X  X
R5               X  X  X  X
R6                  X  X
R7                     X  X
R8                        X

We can observe that no more than 4 registers need to be live at any given instruction.

a) ii) Best's algorithm can be used: for each operation, it ensures that the operands are already in a physical register (if they are not, a physical register is allocated); if they are not needed after the operation, they are freed. For the left-hand side of the operation, a free register is used, or else the register whose value is used farthest in the future (in which case its current value has to be spilled to memory). One possible outcome of the algorithm is:

1. load r1, @x
2. load r2, @y
3. add r3, r1, r2
4. mult r4, r1, r2
5. add r1, r3, 1
6. add r2, r4, r3
7. sub r3, r2, r4
8. mult r2, r1, r3

b) First, build the precedence graph:
9 depends on 3 and 1
8 depends on 3
7 depends on 5
6 depends on 5 and 3
5 depends on 4
3 depends on 2

Then assign weights based on latencies and the maximal sum of latencies to the exit:
1, 2, 4 have a weight of 4
3, 5 have a weight of 3
9 has a weight of 2
6, 7, 8 have a weight of 1

We schedule an instruction whenever it is available, starting with those with the highest delay (in case of a tie we choose randomly; one could think of other ways of resolving ties too, as shown next):

Cycle 1: 1 2
Cycle 2: 4 3
Cycle 3: 5 9
Cycle 4: nop nop
Cycle 5: 6 7
Cycle 6: 8

However, if we resolve ties by considering, say, the number of descendants, then:

Cycle 1: 4 2
Cycle 2: 1 5
Cycle 3: 3 nop
Cycle 4: 9 6
Cycle 5: 8 7

c) Multiple basic blocks can be scheduled by drawing the dependence graph of each basic block, assigning weights as usual, and then picking nodes based on their weight, regardless of the fact that they belong to different graphs.

d) The availability of parallelism in a basic block (hence no loops, etc.) depends on the number of instructions that can be executed in parallel. This information can be found from the dependence graph, by looking for the largest number of independent tasks. (3 marks)
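The weights in b) are longest latency paths to the end of the block, and can be recomputed from the dependences. A sketch follows; the per-instruction latencies here are assumptions, chosen only to be consistent with the weights quoted in the answer (the exam paper's actual latencies are not reproduced above):

```python
from functools import lru_cache

# successors of each instruction, read off "x depends on y" in reverse
succ = {1: [9], 2: [3], 3: [9, 8, 6], 4: [5], 5: [7, 6],
        6: [], 7: [], 8: [], 9: []}
# ASSUMED latencies, not given in the text above
latency = {1: 2, 2: 1, 3: 1, 4: 1, 5: 2, 6: 1, 7: 1, 8: 1, 9: 2}

@lru_cache(maxsize=None)
def weight(n):
    # weight = own latency + longest latency path through any dependent
    return latency[n] + max((weight(s) for s in succ[n]), default=0)

assert {n: weight(n) for n in succ} == {1: 4, 2: 4, 3: 3, 4: 4,
                                        5: 3, 6: 1, 7: 1, 8: 1, 9: 2}
```

The list scheduler then simply prefers ready instructions with the largest weight, which is exactly how the Cycle 1 to Cycle 6 schedules above were obtained.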