Abstract Data Types. Stack ADT


Abstract Data Types

Data abstraction concepts have been introduced in previous courses, for example, in Chapter 7 of [Deitel]. The Abstract Data Type makes data abstraction more formal. In essence, an abstract data type (ADT) consists of the following:

A collection of data
A set of operations on the data or subsets of the data
A set of axioms, or rules of behavior, governing the interaction of operations

The ADT concept is similar, and closely related, to the classical systems of mathematics, where an entity (such as a number system, group, or field) is defined in terms of operations and axioms. The ADT is also closely related to set theory, and in fact could be considered a variant of set theory. The concepts of vector, list, and deque, introduced in earlier chapters, could also be described as ADTs.

Before proceeding to our primary examples of ADTs, stack and queue, let's clarify the distinction between a data structure and an ADT. A data structure is an implementation. For example, Vector, List, and Deque are data structures. An abstract data type is a system described in terms of its behavior, without regard to its implementation. The data structures Vector, List, and Deque implement the ADTs vector, list, and deque, respectively. We return to the discussion of these ADTs later in this chapter.

Within the context of C++ (or most other object-oriented languages such as Java and Smalltalk), an abstract data type can be defined as a class's public interface with the ADT axioms stated as behavior requirements. A data structure in this context is a fully implemented class, whether or not explicit requirements have been placed on its behavior -- it is what it is. Of course, we often do place requirements on behavior prior to implementation, as we did for Vector, List, and Deque. The C++ language standard, in fact, places behavior and performance requirements on all of the classes in the STL. Both of these examples (our class, and the C++ STL) have thus integrated ADT concepts into the software development process: start with an ADT and then design a data structure that implements it. Conversely, any proper class can in principle be analyzed, axiomatized, and used to define a corresponding ADT, which the class implements.

In this chapter we will concentrate on the stack and queue ADTs: definition, properties, uses, and implementations. The stack and queue are arguably the most important, and certainly the most often encountered, ADTs in computer science.

A final comment is critically important: without the third component, the axioms, the definition of an ADT is essentially vacuous. Note, for example, that the choices of operation names are inherently meaningless, and without axioms the operations have no constraints on their behavior. When you see the definitions of stack and queue, a little analysis should convince you that it is only the axioms that give meaning to, and distinguish between, the two concepts.

Stack ADT

The stack ADT operates on a collection consisting of elements of any proper type T and, like most ADTs, inserts, removes, and manipulates data items from the collection. The stack operations are traditionally named as follows:

void push (T t)
void pop ()
T top ()
bool empty ()
unsigned int size ()
constructor and destructor
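
Before stating the axioms, here is a minimal sketch of how these operations might be declared as a C++ class template. The name TStack and the choice of std::deque as the underlying storage are illustrative assumptions made for this sketch, not the course's Stack class; any container supporting push_back, pop_back, and back would do.

    #include <deque>

    template <typename T>
    class TStack
    {
    public:
      void         push (const T& t) { data_.push_back(t); }   // insert t at the top
      void         pop  ()           { data_.pop_back(); }     // remove the top element
      T            top  () const     { return data_.back(); }  // return the top element
      bool         empty() const     { return data_.empty(); }
      unsigned int size () const     { return static_cast<unsigned int>(data_.size()); }
      // the compiler-generated constructor and destructor suffice here
    private:
      std::deque<T> data_;   // one possible underlying data structure
    };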

The stack ADT has the following axioms:

1. S.size(), S.empty(), S.push(t) are always defined
2. S.pop() and S.top() are defined iff S.empty() is false
3. S.empty(), S.size(), S.top() do not change S
4. S.empty() is true iff 0 = S.size()
5. S.push(t) followed by S.pop() leaves S unchanged
6. after S.push(t), S.top() returns t
7. S.push(t) increases S.size() by 1
8. S.pop() decreases S.size() by 1

These axioms can be used to prove, among other things, that the stack is unique up to isomorphism: given any two stacks on the same data type, there is a one-to-one correspondence between their elements that respects the stack operations. Another way to state this fact is that the only possible difference between two stacks on the same data type is a change in terminology; the essential functionalities are identical.

Stack Model -- LIFO

The stack models the behavior called "LIFO", which stands for "last-in, first-out", the behavior described by axioms 5 and 6. The element most recently pushed onto the stack is the one that is at the top of the stack and is the element that is removed by the next pop operation. The slide illustrates a stack starting empty (as all stacks begin their existence) and follows the stack through three push operations and one pop operation.

Derivable Behaviors (Theorems)

For the following theorems, assume that S is a stack of elements of type T.

Theorem. If (n = S.size()) is followed by k push operations then n + k = S.size().

Proof. Note that stack axiom 7 states the theorem in the case k = 1. Suppose that the theorem is true for the case k. Then, if we push k + 1 elements onto S, axiom 7 again states that the size is increased by one from the size after k pushes, that is, from (n + k) to (n + k) + 1 = n + (k + 1), completing the induction step. Therefore the theorem is proved, by the principle of mathematical induction.

Corollary. The size of a stack is non-negative.

Proof. If the size of S is n < 0, then we obtain a stack of size 0 by pushing -n elements onto S. But a stack that has just been pushed onto is not empty, by axioms 5 and 2 (the pop guaranteed by axiom 5 is defined only when the stack is nonempty). This contradicts axiom 4.

Theorem. If (n = S.size()) is followed by k pop operations then n - k = S.size().

Proof. (Left as an exercise for the student.)

Corollary. k <= n; that is, a stack of size n can be popped at most n times without intervening push operations.

Proof. The size of a stack cannot be negative.

Theorem. The last element of S pushed onto S is the top of S.

Proof. Actually, this reiterates axiom 6.

Theorem. S.pop() removes the last element of S pushed onto S.

Proof. By axiom 5, S.pop() does not remove anything that was on the stack prior to the last push operation. The only other possibility is that it removes the item added by the preceding push.

Theorem. Any two stacks of type T are isomorphic.

This last result is beyond the scope of this course. The proof is not difficult, but it would require the development of considerable apparatus, such as defining isomorphism, and in the end the result would be difficult to appreciate, beyond the intuitive level, without dwelling longer on the various implications of isomorphism. Therefore we will skip the proof.

Another point that should be addressed, if we were starting a course in the theory of ADTs, is the independence of the axioms. The axioms listed for ADT stack were assembled to make it easy to prove theorems and to accept, at least intuitively, that they characterize stack as a type. It is possible, however, that one of these axioms could be derived from the others.
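
To see the axioms at work on a concrete implementation, here is a short snippet using the standard container adapter std::stack, one data structure that implements the stack ADT. The assert calls simply restate axioms 4 through 8 for this particular run and are illustrative only.

    #include <cassert>
    #include <stack>

    int main()
    {
      std::stack<int> s;                    // stacks begin their existence empty
      assert(s.empty() && s.size() == 0);   // axiom 4

      s.push(10);
      s.push(20);                           // axiom 7: each push grows the size by 1
      assert(s.size() == 2);
      assert(s.top() == 20);                // axiom 6: top returns the last value pushed

      s.pop();                              // axiom 8: pop shrinks the size by 1
      assert(s.size() == 1);
      assert(s.top() == 10);                // axiom 5: push then pop leaves S unchanged
      return 0;
    }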

Uses of ADT Stack

Stacks (and queues) are ubiquitous in computing. Stacks, in particular, serve as a basic framework to manage general computational systems as well as in support of specific algorithms. All modern computers have both hardware and software stacks, the former to manage calculations and the latter to manage the runtime of languages such as C++. We now discuss five of the archetypical applications of the stack ADT.

A widely used search process, depth-first search (DFS), is implemented using a stack. The idea behind DFS, using the context of solving a maze, is to continue down paths until the way is blocked, then backtrack to the last unattempted branch path, and continue. The process ends when a goal is reached or there are no unsearched paths. A stack is used to keep track of the state of the search, pushing steps onto the stack as they are taken and popping them off during backtracking.

All modern general-purpose computers have a hardware-supported stack for evaluating postfix expressions. To use the postfix evaluation stack effectively, computations must be converted from infix to postfix notation. This translation is part of modern compilers and interpreters and uses a software stack.

C++ uses a runtime system that manages the memory requirements of an executing program. A stack, called the runtime stack, is used to organize the function calls in the program. When a function is called, an activation record is pushed onto the runtime stack. When the function returns, its activation record is popped. Among other things, this system makes it straightforward for a function to call itself -- just push another activation record onto the runtime stack. Stack-based runtime systems thus facilitate the use of recursion in programs.

Queue ADT

The queue ADT operates on a collection consisting of elements of any proper type T and, like most ADTs, inserts, removes, and manipulates data items from the collection. We will break from tradition and use the following names for queue operations:

void push (T t)
void pop ()
T front ()
bool empty ()
unsigned int size ()
constructor and destructor

The queue ADT has the following axioms:

1. Q.size(), Q.empty(), Q.push(t) are always defined
2. Q.pop() and Q.front() are defined iff Q.empty() is false
3. Q.empty(), Q.size(), Q.front() do not change Q
4. Q.empty() is true iff 0 = Q.size()
5. Suppose n = Q.size() and the next element pushed onto Q is t; then, after n elements have been popped from Q, t = Q.front()
6. Q.push(t) increases Q.size() by 1
7. Q.pop() decreases Q.size() by 1
8. If t = Q.front() then Q.pop() removes t from Q

These axioms can be used to prove, among other things, that the queue is unique up to isomorphism: given any two queues on the same data type, there is a one-to-one correspondence between their elements that respects the queue operations. Another way to state this fact is that the only possible difference between two queues on the same data type is a change in terminology; the essential functionalities are identical.

Queue Model -- FIFO

The queue models the behavior called "FIFO", which stands for "first-in, first-out", the behavior described by axiom 5. The element that has been in the queue the longest is the one that is at the front of the queue and is the element that is removed by the next pop operation. The slide illustrates a queue starting empty (as all queues begin their existence) and follows the queue through four push and three pop operations.

Derivable Behaviors (Theorems)

For the following theorems, assume that Q is a queue of elements of type T.

Theorem. If (n = Q.size()) is followed by k push operations then n + k = Q.size().

Proof. Apply mathematical induction using axiom 6.

Theorem. If (n = Q.size()) is followed by k pop operations then n - k = Q.size().

Proof. Apply mathematical induction using axiom 7.

Corollary. k <= n; that is, a queue of size n can be popped at most n times without intervening push operations.

Theorem. The first element pushed onto Q (of those still in Q) is the front of Q.

Proof. Axiom 5 states that the youngest element in the queue requires Q.size() pop operations to be removed. It follows that all of the older elements are popped before the youngest. (The eskimo mafia rule: the older you are, the sooner you are popped.)

Theorem. Q.pop() removes the front element of Q.

Theorem. Any two queues of type T are isomorphic.

Proof. (Beyond our scope.)
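
The FIFO behavior can likewise be made concrete with a short snippet using the standard container adapter std::queue, one data structure that implements the queue ADT. The asserts restate axiom 5 for this particular run and are illustrative only.

    #include <cassert>
    #include <queue>

    int main()
    {
      std::queue<int> q;        // queues begin their existence empty
      q.push(1);
      q.push(2);                // when 3 is pushed below, q.size() is 2 ...
      q.push(3);
      assert(q.size() == 3);
      assert(q.front() == 1);   // the oldest element is at the front

      q.pop();                  // removes 1, the element pushed first
      q.pop();                  // removes 2
      assert(q.front() == 3);   // ... so after 2 pops, 3 is at the front (axiom 5)
      return 0;
    }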

Uses of ADT Queue

Queues (and stacks) are ubiquitous in computing. Queues, in particular, support general computing as buffers (essential for almost any conceivable I/O), are central to algorithms such as breadth-first search, and are a key concept in many simulations.

Just as ADT stack is embedded in the hardware of modern CPUs to support evaluation of expressions, ADT queue, in the form of buffers, is an essential component of virtually all computer-based communication. Buffers facilitate the transfer of data by making it unnecessary to synchronize send with receive; without them, every computation would be hopelessly I/O bound.

ADT queue is a model of actual queues of people, jobs, and processes as well. Thus queues are a natural component of many simulation models of real processes, from fast-food restaurants to computer operating systems.

The BFS algorithm is detailed later in this chapter.

Depth-First Search (DFS) -- Backtracking

The DFS algorithm applies to any graph-like structure, including directed graphs, undirected graphs, trees, and mazes. The basic idea of DFS is "go deep" whenever possible but backtrack when necessary.

GoDeep: If there is an unvisited neighbor (a location adjacent to the current location), go there.

BackTrack: Retreat along the path to the most recently encountered location with an unvisited neighbor.

The DFS algorithm initializes at the start location and repeats "if (possible) GoDeep else BackTrack" until there are no moves left or until the goal is encountered. Our version of DFS is single-goal oriented. Another version is oriented toward visiting every location, and consists of applying the goal-oriented version repeatedly at every unvisited location until all locations have been visited. See Chapter VI of [Cormen].

DFS -- Backtracking (cont.)

A stack of locations is a convenient way to organize DFS. This slide depicts a setup for DFS consisting of a graph, a starting vertex, a goal vertex, and a stack. The contents of the stack during the run of the DFS algorithm are also shown. The following is a pseudo-code version of the stack-based DFS algorithm.

Assumptions: G is a graph with vertices start and goal.

Body:

    DFS
      stack<locations> S;
      mark start visited;
      S.push(start);
      while (S is not empty)
        t = S.top();
        if (t == goal)
          Success(S); return;
        if (t has unvisited neighbors)
          choose a next unvisited neighbor n;
          mark n visited;
          S.push(n);
        else
          BackTrack(S);
      Failure(S);

    BackTrack(S)
      while (!S.empty() && S.top() has no unvisited neighbors)
        S.pop();

    Success(S)
      cout << "Path from " << goal << " to " << start << ": ";
      while (!S.empty())
        output(S.top());
        S.pop();

    Failure(S)
      cout << "No path from " << start << " to " << goal;
      while (!S.empty())
        S.pop();

Outcome. If there is a path in G from start to goal, DFS finds one such path.

Proof. We use mathematical induction on the number n of vertices in G.

Base case (n = 1). If G has only one vertex, it must be both start and goal. There is a path, and that path is discovered by the algorithm by looking at S.top() immediately after initialization with S.push(start).

Induction step. Assume that the result is true for any graph of size less than n, and consider our given graph G of size n. Suppose that there is a path in G from start to goal. Let H be the subgraph of G obtained by removing goal and the edges incident to goal. Note that H contains all vertices of G except goal, and all but the last edge of the known path from start to goal. The induction hypothesis implies that DFS finds a path in H from start to the end of this truncated path, a neighbor of goal. This neighbor of goal is thus at the top of S at the point of successfully finding this path.

The algorithm runs in exactly the same way on G, either finding a path involving a different neighbor of goal or arriving at the state with a neighbor of goal at the top of S. Since goal is an unvisited neighbor of the top of S, the algorithm pushes goal onto S before any more backtracking can occur. The next cycle of the loop discovers goal at the top of S.

It remains to show that the contents of the stack form a path from start to goal. Note that each time a vertex is pushed onto S, it is a neighbor of the top of S. Thus the contents of S do form a path from the bottom (start) to the top (goal).
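
The pseudo-code translates almost line for line into C++ using std::stack. The adjacency-list Graph type, the visited vector, and the recovery of the path by copying the stack are illustrative assumptions made for this sketch; they are not part of the lecture's pseudo-code.

    #include <iostream>
    #include <stack>
    #include <vector>

    using Graph = std::vector<std::vector<int>>;   // vertices are 0 .. n-1

    // Returns a path from start to goal as a vector (empty if no path exists).
    std::vector<int> dfs_path (const Graph& g, int start, int goal)
    {
      std::vector<bool> visited(g.size(), false);
      std::stack<int> s;                           // the ADT stack of locations
      visited[start] = true;
      s.push(start);
      while (!s.empty())
      {
        int t = s.top();
        if (t == goal)                             // Success: the stack holds a path
        {
          std::vector<int> path;
          for (; !s.empty(); s.pop()) path.push_back(s.top());
          return std::vector<int>(path.rbegin(), path.rend());   // bottom (start) .. top (goal)
        }
        bool advanced = false;
        for (int n : g[t])                         // GoDeep: first unvisited neighbor
          if (!visited[n]) { visited[n] = true; s.push(n); advanced = true; break; }
        if (!advanced) s.pop();                    // BackTrack
      }
      return {};                                   // Failure: no path
    }

    int main()
    {
      Graph g(5);
      auto edge = [&g](int a, int b) { g[a].push_back(b); g[b].push_back(a); };
      edge(0, 1); edge(1, 2); edge(2, 3); edge(1, 4);
      for (int v : dfs_path(g, 0, 3)) std::cout << v << ' ';   // prints a path such as 0 1 2 3
      std::cout << '\n';
      return 0;
    }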

Breadth-First Search (BFS)

If DFS is "go deep" then BFS is "go wide". We could compare the two by solving mazes. DFS solves a maze as a single person might: search as far down a path as possible, and when blocked, retreat to the first place from which the search can continue. Stop when the goal is found or the possibilities are exhausted. BFS, on the other hand, solves a maze using a committee. The committee starts out together. Every time there is a fork in the maze, the committee breaks into subcommittees and sends a subcommittee down each path of the fork. The first subcommittee arriving at the goal blows a whistle, or else the process continues until all possibilities have been exhausted. Note that the committee arriving at the goal first has found the shortest path solving the maze, and the length of this path is the distance from the start to the goal.

BFS (cont.)

A queue of locations is a convenient way to organize BFS. This slide depicts a setup for BFS consisting of a graph, a starting vertex, a goal vertex, and a queue. The contents of the queue during the run of the BFS algorithm are also shown. The following is a pseudo-code version of the queue-based BFS algorithm.

Assumptions: G is a graph with vertices start and goal.

Body:

    BFS
      queue<locations> Q;
      mark start visited;
      Q.push(start);
      while (Q is not empty)
        t = Q.front();
        for each unvisited neighbor n of t
          mark n visited;
          Q.push(n);
          if (n == goal)
            Success(); return;
        Q.pop();
      Failure(Q);

    Success()   // must build solution using backtrack pointers
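
As with DFS, the BFS pseudo-code maps directly onto std::queue. In this sketch the Graph type and the parent vector (the "backtrack pointers" mentioned above, used to rebuild the shortest path) are illustrative assumptions.

    #include <iostream>
    #include <queue>
    #include <vector>

    using Graph = std::vector<std::vector<int>>;   // vertices are 0 .. n-1

    // Returns a shortest path from start to goal (empty if no path exists).
    std::vector<int> bfs_path (const Graph& g, int start, int goal)
    {
      if (start == goal) return { start };
      std::vector<int>  parent(g.size(), -1);      // backtrack pointers
      std::vector<bool> visited(g.size(), false);
      std::queue<int> q;                           // the ADT queue of locations
      visited[start] = true;
      q.push(start);
      while (!q.empty())
      {
        int t = q.front();
        for (int n : g[t])
        {
          if (visited[n]) continue;
          visited[n] = true;
          parent[n] = t;
          q.push(n);
          if (n == goal)                           // Success: follow parents back to start
          {
            std::vector<int> path;
            for (int v = goal; v != -1; v = parent[v]) path.push_back(v);
            return std::vector<int>(path.rbegin(), path.rend());
          }
        }
        q.pop();
      }
      return {};                                   // Failure: no path
    }

    int main()
    {
      Graph g(6);
      auto edge = [&g](int a, int b) { g[a].push_back(b); g[b].push_back(a); };
      edge(0, 1); edge(0, 2); edge(1, 3); edge(2, 3); edge(3, 4); edge(1, 5);
      for (int v : bfs_path(g, 0, 4)) std::cout << v << ' ';   // prints a shortest path such as 0 1 3 4
      std::cout << '\n';
      return 0;
    }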

Outcome. If there is a path in G from start to goal, BFS finds a shortest such path.

This version of BFS, like DFS above, is goal-oriented. A broader version that searches an entire graph is discussed very thoroughly in Chapter VI of [Cormen], including proofs of correctness and runtime analysis. The algorithm above would need some embellishment to keep track of data, depending on the desired outcome. We postpone further analysis of BFS and DFS to COP 4531.

Evaluating Postfix Expressions -- Algorithm

The process of evaluating expressions is an important part of most computer programs, and typically breaks into two distinct steps:

1. Translation of infix expression to postfix expression
2. Evaluation of postfix expression

The first step is usually accomplished in software, either compiler or interpreter. The second step is usually accomplished with significant hardware support. However, both of these steps are facilitated with the stack ADT. We discuss the evaluation algorithm here, and discuss the translation algorithm after introducing our stack implementation later in this chapter.

Both translation and evaluation use the concept of token. A token is an atomic symbol in a computer program or computation. For example, the code fragment

    int num1, num2;
    cin >> num1;
    num2 = num1 + 5;
    my_function(num2);

contains the following stream of tokens (written vertically for clarity):

    int
    num1
    ,
    num2
    ;
    cin
    >>
    num1
    ;
    num2
    =
    num1
    +
    5
    ;
    my_function
    (
    num2
    )
    ;

and for the expression

    z = 25 + x*(y - 5)

the token stream is

    z  =  25  +  x  *  (  y  -  5  )

The postfix evaluation algorithm is described in the following pseudo-code:

    Evaluate(postfix expression)
      uses stack of tokens;
      while (expression is not empty)
        t = next token;
        if (t is operand)
          push t onto stack
        else   // t is an operator
          pop operands for t off stack;
          evaluate t on these operands;
          push result onto stack;
      // at this point, there should be exactly one operand on the stack
      return top of stack;

Evaluating Postfix Expressions -- Example

The slide illustrates the evaluation algorithm applied to the expression

    1 2 3 + 4 * + 5 +

which is the postfix translation of the infix expression 1 + (2 + 3) * 4 + 5. The stack operations and the stack state after each operation are shown.
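
The evaluation algorithm is easy to express in C++ with std::stack. The function below handles space-separated integer tokens and the four binary operators + - * /; the tokenization with std::istringstream and the integer value type are simplifying assumptions made for this sketch.

    #include <iostream>
    #include <sstream>
    #include <stack>
    #include <string>

    int evaluate_postfix (const std::string& expr)
    {
      std::stack<int> s;                    // stack of operand values
      std::istringstream in(expr);
      std::string t;
      while (in >> t)                       // t = next token
      {
        if (t == "+" || t == "-" || t == "*" || t == "/")
        {
          int rhs = s.top(); s.pop();       // pop operands for t off stack
          int lhs = s.top(); s.pop();
          int result = (t == "+") ? lhs + rhs
                     : (t == "-") ? lhs - rhs
                     : (t == "*") ? lhs * rhs
                     :              lhs / rhs;
          s.push(result);                   // push result onto stack
        }
        else
        {
          s.push(std::stoi(t));             // operand: push its value
        }
      }
      return s.top();                       // exactly one value should remain
    }

    int main()
    {
      std::cout << evaluate_postfix("1 2 3 + 4 * + 5 +") << '\n';   // prints 26
      return 0;
    }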

Runtime Stack and Recursion

Most modern programming languages use a runtime environment that includes a runtime stack of function activation records and a runtime heap where dynamic allocation and de-allocation of memory is managed. (The latter may be automated by a garbage collection system -- a slow, but programmer-friendly, memory management system.)

As a C++ executable is loaded, a set of contiguous memory addresses is assigned to it by the operating system. This interval [bottom, top] of addresses is the address space assigned to the running program. The address space is organized as follows:

    [ static | stack -->  ...  <-- heap ]

where '[' denotes the bottom of the address space and ']' denotes the top. The static area is a fixed size and contains such things as the executable code for all program functions, locations for global variables and constants, and a symbol table in which identifiers are matched with addresses (relative to '['). The stack and heap are dynamic in size. The stack grows upward in address space and the heap grows downward. If the stack and heap collide, then the memory allocated to the program has been used up and the program crashes. (With virtual memory addressing and no limits set for the program, such a crash is "virtually" impossible. But without virtual memory, or with memory limits set, this kind of crash can and often does occur.)

The stack is the mechanism by which the runtime state of the program is maintained. The stack element type is AR, or activation record. An activation record contains all essential information about a function, including: function name, location of function code, function parameters, function return value, and the location where control is returned (the location of the function call).

In C/C++, execution begins by pushing the activation record of main() onto the newly created stack. For every function call, a new activation record is pushed onto the runtime stack. Thus the top of the stack is always a record of the currently running function. Whenever a function returns, its activation record is popped off the stack and control is returned to the calling process (whose activation record is now at the top of the stack). Execution ends when main() returns and the stack is popped to the empty state.

The runtime stack requires no restriction on which function activation records may be used at any given point in an executing program. In particular, there is no reason not to permit the same function to be activated twice or more in succession. This innocuous observation is the basis for languages such as C++ to implement recursion. In the context of programming, recursion is the phenomenon of a function calling itself, as in the implementation

    int fib (int n)
    {
      if (n <= 0) return 0;
      if (n == 1) return 1;
      return fib(n - 2) + fib(n - 1);
    }

or a finite sequence of functions each calling the next until the last calls the first, as in the implementation

    int G (int n);   // forward declaration, needed in C++ because F calls G

    int F (int n)
    {
      if (n <= 0) return 1;
      return G(n);
    }

    int G (int n)
    {
      return n * F(n - 1);
    }

The number of functions involved in the circular chain of calls is the order of the recursion. (The first example above is order one recursion and the second is order two recursion.)
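
The push/pop behavior of the runtime stack can be made visible with a small illustrative program; the function name countdown and the printed messages are assumptions made for this sketch, not the contents of an actual activation record.

    #include <iostream>

    // Prints a message as each activation record is (conceptually) pushed and popped.
    void countdown (int n)
    {
      std::cout << "push activation record for countdown(" << n << ")\n";
      if (n > 0) countdown(n - 1);    // the recursive call pushes another record
      std::cout << "pop  activation record for countdown(" << n << ")\n";
    }

    int main()
    {
      countdown(2);
      // Output shows pushes for 2, 1, 0 followed by pops for 0, 1, 2 -- last in, first out.
      return 0;
    }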

Recursion and recursive programming are important topics to which we return in a later chapter.