Intermediate Representations

COMP 506, Rice University, Spring 2018

Intermediate Representations

  source code -> Front End -> IR -> Optimizer -> IR -> Back End -> target code

Copyright 2018, Keith D. Cooper & Linda Torczon, all rights reserved. Students enrolled in COMP 506 at Rice University have explicit permission to make copies of these materials for their personal use. Faculty from other educational institutions may use these materials for nonprofit educational purposes, provided this copyright notice is preserved.

Chapter 5 in EaC2e.

Intermediate Representations

The front end turns a stream of characters into a stream of tokens (Scanner: microsyntax), parses the tokens (Parser: syntax), and performs semantic elaboration, producing IR plus annotations. AHSDT, ad hoc syntax-directed translation, refers to the actions invoked by reductions in an LR(1) parser; the calls made in AHSDT happen in the front end.

- The Parser and Semantic Elaborator emit IR for the rest of the compiler to use, using AHSDT or a similar technique. The front end works from the syntax of the source code.
- The rest of the compiler works from the IR. It analyzes the IR to learn about the code and transforms the IR to improve the final code.
- The IR determines what a compiler can do to the code: it can only manipulate details that are represented in the IR.

Intermediate Representations

The IR is the vehicle that carries information between phases.

- Front end: produces an intermediate representation (IR)
- Optimizer: transforms the IR into an equivalent IR that runs faster; each pass reads and writes IR
- Back end: transforms the IR into native code

The IR determines both the compiler's ambition and its chances for success.

- The compiler's knowledge of the code is encoded in the IR
- The compiler can only manipulate facts that are represented in the IR

Intermediate Representations

Decisions in IR design affect the speed and efficiency of the compiler. Some important IR properties:

- Ease of generation
- Ease of manipulation
- Cost of manipulation
- Procedure size
- Expressiveness
- Level of abstraction

The importance of different properties varies between compilers. Selecting an appropriate IR and implementing it well is crucial.

Intermediate Representations

Example, Ease of Generation: To optimize code, the compiler must understand the flow of control: loops, if-then-else, and case statements, which are typically represented in a control-flow graph (CFG). A CFG has a node for each basic block (a maximal-length sequence of straight-line code) and edges to represent the flow of control. To generate a CFG, however, the compiler needs to understand where blocks begin and end, and to see both the source and sink of each edge. Generating a CFG on the fly is hard; most compilers need a pass over the code, in some form of IR, before they build a CFG.
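To make the "where blocks begin and end" point concrete, here is a hedged sketch of the classic leader-finding step over a linear IR. The instruction format (a `(label, opcode, target)` tuple) and the opcode names `jump`/`cbr` are hypothetical, not from the slides.

```python
# Sketch: finding basic-block boundaries in a linear IR (hypothetical format).
# An instruction is (label, opcode, target): 'label' names the instruction,
# 'target' is a label for branches. A "leader" starts a new basic block.
def find_leaders(code):
    leaders = {0}                        # the first instruction leads a block
    targets = {ins[2] for ins in code if ins[1] in ("jump", "cbr")}
    for i, (label, op, _) in enumerate(code):
        if label in targets:             # a branch target leads a block
            leaders.add(i)
        if op in ("jump", "cbr") and i + 1 < len(code):
            leaders.add(i + 1)           # the instruction after a branch leads one
    return sorted(leaders)

code = [("L0", "add",  None),
        ("L1", "cbr",  "L3"),
        ("L2", "mult", None),
        ("L3", "sub",  None)]
print(find_leaders(code))  # [0, 2, 3]
```

Note that the set of branch targets must be gathered in a separate pass before any block boundary can be placed, which is exactly why the CFG is hard to build on the fly.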

Intermediate Representations

Example, Ease of Manipulation: In Java code that keeps a list of instructions, you might choose Java's ArrayList class, which provides many useful features.

Cost of Manipulation: ArrayList is efficient, unless you are inserting items into the middle of a long list, as occurs in many optimizations.

Impact on the Compiler: if the compiler needs to reorder code or insert code, the costs can go quadratic due to the array-based implementation underlying ArrayList.

Intermediate Representations

Example, Expressiveness: a compiler with a near-source-level IR may have trouble representing the results of some optimizations.

Simple initialization:

  for i = 0 to n
    for j = 0 to m
      a[i,j] = 0

After OSR:

  p = &a[0,0]
  for i = 0 to n*m
    *p++ = 0

The implementation of operator strength reduction (OSR) can only produce this result if the IR can represent *p++. (See 10.7.2 in EaC2e.)

Intermediate Representations

Example, Level of Abstraction: copying a string is a complex operation that involves an internal loop.

- Explicitly representing the loop and its details exposes all those details to uniform optimization (a good thing).
- Explicitly representing the loop makes it difficult to move the copy to another location in the code; moving control-flow constructs is difficult, at best.
- Representing the copy as a single operation, like the S/370 mvcl, makes it easy to move.

Taxonomy of Intermediate Representations

More on the level of abstraction: is the IR closer in abstraction to the source or to the machine? High level means closer to the source code; low level means closer to the machine code. The level of exposed detail influences the feasibility and profitability of different code optimizations.

Example: A[i,j]

High-level AST (good for memory disambiguation):

  subscript
    A  i  j

Low-level linear code (good for address calculation):

  loadi 1      => r1
  sub   rj, r1 => r2
  loadi 10     => r3
  mult  r2, r3 => r4
  sub   ri, r1 => r5
  add   r4, r5 => r6
  loadi @A     => r7
  add   r7, r6 => r8
  load  r8     => rAij
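The low-level sequence above is just address arithmetic. As a sketch, the offset it computes can be written directly; I assume here the layout the ILOC implies (1-based indices, a dimension of 10, unit-sized elements; a real compiler would also scale by the element size).

```python
# Sketch of the address arithmetic in the low-level ILOC above:
# address of A[i,j] = @A + (j - 1) * 10 + (i - 1), assuming 1-based
# indices, a dimension of 10, and unit-sized elements.
def address_of(base, i, j, dim=10):
    return base + (j - 1) * dim + (i - 1)

print(address_of(1000, 1, 1))  # 1000: A[1,1] sits at the base address
print(address_of(1000, 3, 2))  # 1012: offset (2-1)*10 + (3-1)
```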

Level of Abstraction

Do not confuse graph versus code with high level versus low level. On the last slide, the tree was high level and the code was low level; that situation could easily be reversed.

For the same reference A[i,j], a low-level AST spells out the address arithmetic: a load whose address subtree computes @A + (10 * (j - 1)) + (i - 1). A high-level linear code can hide all of that in a single operation:

  loadarray A, i, j

In instruction selection, compilers often use trees that have a lower level of abstraction than machine code.

Taxonomy of Intermediate Representations

Three major categories of IR:

Structural IRs
- Graphs and trees
- Widely used in source-to-source translators
- Tend to use large amounts of memory
- Examples: trees, directed acyclic graphs, the interference graph (register allocation)

Linear IRs
- Pseudo-code for an abstract machine
- Simple, compact data structures
- Can be easy to reorder and rearrange
- Examples: three-address code (ILOC), stack machine code (Java bytecode), register-transfer language (RTL)

Hybrid IRs
- Combinations of graphs and linear code
- Provide some of the advantages of both structural and linear IRs
- Examples: control-flow graph, SSA form


Structural IRs: Parse Tree

A parse tree, or syntax tree, represents the input program's derivation:

- an interior node for each nonterminal symbol in the derivation
- a leaf node for each word (terminal symbol) in the derivation
- a simple inorder treewalk will reproduce the original syntax

A parse tree captures every detail of the input program, but expands none of them. What you see is what you get; what you see is all you have.

Parse tree for x - 2 * y (see the slides on parsing):

  Expr
    Expr
      <id,x>
    Op
      minus
    Expr
      Expr
        <num,2>
      Op
        times
      Expr
        <id,y>

Structural IRs: Abstract Syntax Tree

An abstract syntax tree (AST) is the parse tree with the nodes for most nonterminal symbols removed:

- many fewer nodes (about 50%)
- simpler structure
- little information is lost, compared to the parse tree (the lost information concerns the derivation, not the meaning; knowledge about the meaning is encoded in the structure of the AST)

AST for x - 2 * y:

      -
     / \
    x   *
       / \
      2   y

The AST can be represented as a graph (i.e., nodes and edges), or in a linear form, which is easier to manipulate than pointers:

- x 2 y * -  in postfix form
- - x * 2 y  in prefix form
- S-expressions (Scheme, Lisp) are (essentially) ASTs

Some compilers build ASTs as an initial IR. In practice, ASTs tend to be large, not because they must be, but because they grow through the addition of more and more fields on each node and edge. See the digression on page 228 in EaC2e.
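The prefix and postfix linearizations fall out of simple treewalks. A minimal sketch, with node and function names of my own choosing:

```python
# Sketch: a minimal AST for x - 2 * y, with prefix and postfix linearizations.
class Node:
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

def prefix(n):
    # visit the operator before its operands
    return [n.value] + [v for c in n.children for v in prefix(c)]

def postfix(n):
    # visit the operands before their operator
    return [v for c in n.children for v in postfix(c)] + [n.value]

ast = Node("-", [Node("x"), Node("*", [Node("2"), Node("y")])])
print(" ".join(prefix(ast)))   # - x * 2 y
print(" ".join(postfix(ast)))  # x 2 y * -
```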

Structural IRs: Directed Acyclic Graph

A directed acyclic graph (DAG) is an AST with a unique node for each value:

- the DAG makes sharing explicit
- the DAG directly encodes redundancy
- the DAG saves a little space
- the DAG has no obvious linear form

Example: for z ← x - 2 * y and w ← 2 * y, the DAG contains a single node for the shared subexpression 2 * y, reached from both z and w.

Building a DAG:

- requires analysis to recognize redundancy
- can use hash consing, a scheme that folds a hash-table test into node creation and returns the earlier result, if one exists
- can use an algorithm similar to value numbering

In a linear IR, sharing is encoded in the name space of the values. If the compiler knows that it has two copies of the same expression, it may be able to arrange the code to evaluate that redundant expression only once.

DAGs were popular in the 70s due to memory constraints. Hash consing in LISP and Scheme systems also simplified equality tests.
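Hash consing can be sketched in a few lines: node creation consults a table keyed by the operator and the identities of the children, so a second request for 2 * y returns the node built the first time. The class and representation here are my own illustration, not a prescribed design.

```python
# Sketch of hash consing: node creation consults a table keyed by
# (op, child identities); an existing node is returned instead of a
# duplicate, so the "tree" becomes a DAG with one node per distinct value.
class DAGBuilder:
    def __init__(self):
        self.table = {}

    def node(self, op, *kids):
        key = (op,) + tuple(id(k) for k in kids)
        if key not in self.table:
            self.table[key] = (op, kids)   # create the node only once
        return self.table[key]

b = DAGBuilder()
two_y_1 = b.node("*", b.node("2"), b.node("y"))
two_y_2 = b.node("*", b.node("2"), b.node("y"))
print(two_y_1 is two_y_2)  # True: z <- x - 2*y and w <- 2*y share this node
```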

Structural IRs: Implementing Trees

In many contexts, we teach students to create purpose-built trees:

- Node and Edge are classes
- nodes come in various types and arities
- allocating nodes of different sizes is an issue for the system (inefficient but functional)

Knuth showed that you can map any arbitrary tree onto a binary tree:

- allocate uniform-size, binary nodes
- two pointers: child and sibling
- simplify allocation and traversal

(See 2.3.2 in Knuth, Vol. 1, or B.3 in EaC2e.)

Example: in the natural tree for a for loop, the for node has children i, 1, n, 2, and the loop body, with the next statement as its sibling in the statement list. In the child/sibling form, the for node's child pointer leads to i, the sibling chain from i runs through 1, n, 2, and the loop body, and the for node's sibling pointer leads to the next statement.

Tree implementations often become complex and bloated. Knuth's mapping trick lets the compiler writer stick with a single format for nodes, which simplifies allocation and helps keep things simple. (An informal survey in the early 90s showed about 1KB per line in FORTRAN ASTs!)
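The child/sibling mapping above can be sketched directly; the class and helper names are mine, chosen for illustration.

```python
# Sketch of Knuth's child/sibling mapping: every node has the same two
# pointer fields, so a for node with five children becomes a sibling chain.
class BNode:
    def __init__(self, value):
        self.value, self.child, self.sibling = value, None, None

def add_child(parent, value):
    node = BNode(value)
    if parent.child is None:
        parent.child = node              # first child hangs off 'child'
    else:
        last = parent.child
        while last.sibling:              # later children extend the chain
            last = last.sibling
        last.sibling = node
    return node

loop = BNode("for")
for part in ("i", "1", "n", "2", "loop body"):
    add_child(loop, part)
loop.sibling = BNode("next stmt")        # next statement in the stmt list

kids, c = [], loop.child
while c:
    kids.append(c.value)
    c = c.sibling
print(kids)  # ['i', '1', 'n', '2', 'loop body']
```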

Structural IRs: Control-Flow Graph

A control-flow graph (CFG) models the transfers of control in the procedure.

- Nodes in the graph are basic blocks. Each basic block, a maximal-length sequence of straight-line code, is represented by a distinct node; a block can be represented with quads or any other linear representation.
- Edges in the graph represent control flow. Control-flow transfers (e.g., branches and jumps) are represented as edges between the nodes.

Example:

  if (x = y)
    then { a ← 2; b ← 5 }
    else { a ← 3; b ← 4 }
  c ← a * b

The test block has edges to the two assignment blocks, and each of those has an edge to the block containing c ← a * b.

Note that we cannot place the edges until we see their targets, making it difficult to generate a CFG straight out of the parser (see the Ease of Generation example above).

Implementations: see Figures B.3 and B.4 in Appendix B of EaC2e.


Linear IRs: Stack Machine Code (or One-Address Code)

Originally used for stack-based computers, now used as an IR.

Example: x - 2 * y becomes

  push 2
  push y
  multiply
  push x
  subtract

Advantages:

- Compact form: the introduced names are implicit, not explicit, so they occupy no space
- Simple to generate and execute code
- Useful where code is transmitted over slow communication links or where memory is limited

Java bytecode is essentially a stack machine code; it was designed for transmission over slow links.
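A tiny interpreter makes the stack discipline concrete. This sketch is not any real VM; I assume here that subtract treats the value pushed last (x) as the left operand, which is what makes the slide's sequence compute x - 2*y (conventions differ between stack machines).

```python
# Sketch: evaluating the one-address code above with an operand stack.
# Assumption: 'subtract' uses the most recently pushed value as the left
# operand, so the slide's sequence computes x - 2*y.
def run(code, env):
    stack = []
    for op, *args in code:
        if op == "push":
            stack.append(env.get(args[0], args[0]))  # variable or literal
        elif op == "multiply":
            stack.append(stack.pop() * stack.pop())
        elif op == "subtract":
            left = stack.pop()
            stack.append(left - stack.pop())
    return stack.pop()

code = [("push", 2), ("push", "y"), ("multiply",),
        ("push", "x"), ("subtract",)]
print(run(code, {"x": 10, "y": 3}))  # 4, i.e., 10 - 2*3
```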

Linear IRs: Three-Address Code

Three-address code is modeled on assembly code. In general, three-address code has the form

  x ← y op z

with at most one operator (op) and three names (x, y, and z). Three-address code introduces a new name space of compiler-generated temporary names.

Example: z ← x - 2 * y becomes

  t ← 2 * y
  z ← x - t

Advantages:

- Resembles many real machines
- Introduces a new set of names
- Compact form
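Generating three-address code from an expression tree is a bottom-up walk that mints a fresh temporary for each interior node. A minimal sketch (the tuple representation and naming scheme are mine):

```python
# Sketch: flattening an expression tree into three-address code, minting
# a fresh temporary for each interior node. Leaves are name strings;
# interior nodes are (op, left, right) tuples.
def tac(node, code, counter):
    if isinstance(node, str):
        return node                      # a leaf is already a name
    op, left, right = node
    l = tac(left, code, counter)
    r = tac(right, code, counter)
    counter[0] += 1
    t = f"t{counter[0]}"                 # compiler-generated temporary
    code.append(f"{t} <- {l} {op} {r}")
    return t

code = []
tac(("-", "x", ("*", "2", "y")), code, [0])
print(code)  # ['t1 <- 2 * y', 't2 <- x - t1']
```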

Linear IRs: Three-Address Code

Quadruples: a straightforward implementation of three-address code.

- Use a table of k x 4 small integers
- Simple record structure
- Instantiates the explicit names
- Easy, albeit slow, to reorder

RISC assembly code:

  load  r1, y
  loadi r2, 2
  mult  r3, r2, r1
  load  r4, x
  sub   r5, r4, r3

Quadruples:

  Opcode | Op1 | Op2 | Op3
  load   |  1  |  y  |
  loadi  |  2  |  2  |
  mult   |  3  |  2  |  1
  load   |  4  |  x  |
  sub    |  5  |  4  |  3

Map opcodes into integers; the other fields are interpreted based on position and opcode.

Linear IRs: Three-Address Code

An updated version of quadruples:

- Represent each operation as a 1 x 4 array of integers; the types of the operands are known contextually from the opcode
- Link the operations together in a (doubly) linked list
- Simple record structure; easy to insert, delete, and reorder

For efficiency, you may want to build a custom allocator for the operation structures: get them 100 or 1,000 at a time and dole them out with a cheap macro or inline call.

List of quadruples (TOP to BOTTOM):

  load  1 y
  loadi 2 2
  mult  3 2 1
  load  4 x
  sub   5 4 3
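The payoff of the linked-list form is constant-time insertion in the middle, exactly the operation that goes quadratic on a long array. A sketch with names of my own choosing:

```python
# Sketch: quadruples in a doubly linked list, so an optimizer can insert
# and reorder operations in constant time (unlike a long array).
class Quad:
    def __init__(self, *fields):
        self.fields, self.prev, self.next = fields, None, None

def insert_after(node, new):
    new.prev, new.next = node, node.next
    if node.next:
        node.next.prev = new
    node.next = new

top = Quad("load", 1, "y")
mult = Quad("mult", 3, 2, 1)
insert_after(top, mult)
insert_after(top, Quad("loadi", 2, 2))   # constant-time insert in the middle

ops, q = [], top
while q:
    ops.append(q.fields[0])
    q = q.next
print(ops)  # ['load', 'loadi', 'mult']
```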


Hybrid IRs: Control-Flow Graph, Revisited

The control-flow graph is, almost always, a hybrid that combines two distinct representations: a graph for control and a linear IR for the operations.

- The graph: navigation where helpful; explicit and obvious control flow
- The linear IR for the blocks: explicit ordering for optimization and code generation

Example, revisited. Source code:

  if (x = y)
    then { a ← 2; b ← 5 }
    else { a ← 3; b ← 4 }
  c ← a * b

The individual blocks:

  B1: if (x = y) then goto B2 else goto B3
  B2: a ← 2
      b ← 5
  B3: a ← 3
      b ← 4
  B4: c ← a * b

The graph has edges B1 → B2, B1 → B3, B2 → B4, and B3 → B4.
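The hybrid view above can be sketched as two data structures side by side: an adjacency map for the graph, and quadruple-like lists for the blocks. The tuple encoding is a hypothetical illustration, not a prescribed format.

```python
# Sketch: the example CFG as a hybrid IR. The graph part is an adjacency
# map; the linear part is a list of quadruple-like tuples per block.
blocks = {
    "B1": [("cbr", "x = y", "B2", "B3")],
    "B2": [("assign", "a", 2), ("assign", "b", 5)],
    "B3": [("assign", "a", 3), ("assign", "b", 4)],
    "B4": [("mult", "c", "a", "b")],
}
edges = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}

# Graph navigation answers control-flow questions; the lists give an
# explicit operation order within each block.
print(edges["B1"])                       # ['B2', 'B3']
print(len(blocks["B2"]))                 # 2 operations in block B2
```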

Hybrid IRs: Static Single-Assignment Form

Cocktail-party knowledge: Clang/LLVM uses SSA. Building SSA is complex; we will spend two lectures on it later.

- The main idea: each name is defined by exactly one operation.
- The problem: at an operation, we need one name for each operand.
- The solution: introduce artificial functions, called φ-functions, to unify names. φ-functions mark the points where values merge.

Original code:

  x ← ...
  y ← ...
  while (x < k)
    x ← x + 1
    y ← y + x

Code in SSA form:

        x0 ← ...
        y0 ← ...
        if (x0 >= k) goto next
  loop: x1 ← φ(x0, x2)
        y1 ← φ(y0, y2)
        x2 ← x1 + 1
        y2 ← y1 + x2
        if (x2 < k) goto loop
  next: ...

Strengths of SSA form: sharper analysis, better code (sometimes), faster algorithms.

SSA form is often interpreted as a graph, with edges running from a definition to its uses.
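For straight-line code, SSA renaming is just versioning each name at every definition; no φ-functions are needed. The loop on the slide is precisely what makes real SSA construction hard, so this sketch deliberately handles only the easy case, with a representation of my own choosing.

```python
# Sketch: SSA renaming for straight-line code only (no control flow, so
# no phi-functions; the slide's loop is the hard case this skips).
# Each instruction is (dst, [tokens of the right-hand side]).
def to_ssa(code):
    version, out = {}, []
    for dst, expr in code:
        # rewrite uses with the current version of each name
        expr = [f"{v}{version[v]}" if v in version else v for v in expr]
        # each definition mints a new version of its target
        version[dst] = version.get(dst, -1) + 1
        out.append((f"{dst}{version[dst]}", expr))
    return out

code = [("x", ["1"]), ("y", ["x", "+", "1"]), ("x", ["x", "+", "y"])]
for dst, expr in to_ssa(code):
    print(dst, "<-", " ".join(expr))
```

Running this renames the second definition of x to x1 while its right-hand side still reads x0, so every name is defined exactly once.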

Using Multiple Representations

  source code -> Front End -> IR 1 -> Optimizer -> IR 2 -> Optimizer -> IR 3 -> Back End -> target code

Many recent compilers use multiple representations:

- GCC uses two levels: high-level (GIMPLE) and low-level (RTL)
- Clang uses two levels: LLVM code and a non-SSA version in code generation
- Open64 uses five distinct levels of WHIRL

The strategy is to repeatedly lower the level of abstraction and optimize:

- Introduce more detail at each lowering
- Optimizations at different levels target different effects
- Compilers are good at lowering and elaborating
- The strategy allows each optimization to see an appropriate IR

And There is More

A compiler's IR also includes several tables and maps.

Symbol table
- Maps names (strings) to small integers
- Stores type and location information with the name
- Fast access to annotations, based on either name or number
- Enter the name in the scanner and convert it to a small integer

Constant table
- A place to collect all of the code's literal constants and their types
- Used to generate a static data area, the constant pool
- Constructed on the fly as constants are encountered

Storage map
- Shows where each name is stored: memory locations, virtual registers, or implicit (literal constants in a loadi)
- Typically constructed between parsing the declarations and the executables

We will return to these tables in later lectures.

A Compiler's Symbol Table

The symbol table has an entry for every name that the compiler knows: variables, labels, procedures, classes, system calls, and so on. A hash function h(x) maps a name to its entry. The entry for a name has a bunch of associated fields: lexeme, type, storage class, address, length, volatility, ...

  Name | Type   | Addr   | Length | Storage Class | Volatile
  y    | int    | 0      | 4      | local         | no
  w    | char * | in VR1 | 4      | local         | no
  x    | char   | 8      | 12     | static        | no
  z    | float  | 4      | 4      | global        | yes

The compiler can use a string's table index as a name for it. These short names have uniform length and are easy to compare and test.

Digression (or Rant)

The role of string data in a compiler's intermediate representation.

Principle: for the sake of compactness and efficiency, the compiler should almost never store a string value in the IR.

- Strings are expensive to compare
- Strings are large relative to their information content
- In most cases, we need just one copy of a string

How to handle strings in your IR:

- Use a hash table to convert each string to a small integer
- In the scanner, pass the lexemes as table indices (!)
- Represent the string with the integer
- Compare strings as integers
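The hash-table scheme above is string interning. A minimal sketch, with class and method names of my own choosing:

```python
# Sketch: interning lexemes as small integers, so the rest of the compiler
# compares names with an integer test instead of a string compare.
class StringTable:
    def __init__(self):
        self.index, self.strings = {}, []

    def intern(self, s):
        if s not in self.index:          # the hash lookup from the slide
            self.index[s] = len(self.strings)
            self.strings.append(s)       # keep the one and only copy
        return self.index[s]

t = StringTable()
a = t.intern("counter")
b = t.intern("counter")
print(a == b, t.strings[a])  # True counter
```

The scanner would call intern once per lexeme and hand the rest of the compiler nothing but small integers.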

A Compiler's Symbol Table

In either an Algol-like language (ALL) or an object-oriented language (OOL), the compiler must represent names at multiple scopes. An ALL has lexical scopes; an OOL has several kinds of scopes.

A simple and effective way to build scoped tables is to chain symbol tables, the "sheaf of tables" implementation:

- one table per scope (level k, level k-1, level k-2, ...)
- a lookup fails its way up the chain, from the innermost scope outward
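The sheaf of tables can be sketched as one dictionary per scope with a parent pointer; the names here are mine, chosen for illustration.

```python
# Sketch of the "sheaf of tables": one table per scope, chained to the
# enclosing scope; a lookup fails its way up the chain.
class Scope:
    def __init__(self, parent=None):
        self.table, self.parent = {}, parent

    def declare(self, name, info):
        self.table[name] = info

    def lookup(self, name):
        s = self
        while s is not None:
            if name in s.table:          # found at this level
                return s.table[name]
            s = s.parent                 # fail upward to the enclosing scope
        raise KeyError(name)

glob = Scope()
glob.declare("x", "global int")
inner = Scope(Scope(glob))               # two nested scopes below global
inner.declare("x", "local int")
print(inner.lookup("x"))                 # local int: the innermost x wins
print(inner.parent.lookup("x"))          # global int: found up the chain
```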

A Compiler's Symbol Tables

In an OOL, the compiler must keep multiple sheaves of tables:

- the lexical hierarchy
- the class hierarchy
- the global scope

Search order: lexical, then class, then global.