Lecture 7: General Computation Models


Lecture 7: General Computation Models
Declarative Programming Techniques (Part 2)
November 15, 2017

Difference lists

A difference list is a pair L1#L2 of two lists such that L2 is a suffix of L1 (L2 may have an unbound tail). L1#L2 represents the list obtained by dropping the suffix L2 from L1.

Examples of difference lists:

   X#X                  % represents the empty list nil
   nil#nil              % idem
   [a]#[a]              % idem
   (a|b|c|X)#X          % represents [a b c]
   (a|b|c|d|X)#(d|X)    % idem
   [a b c d]#[d]        % idem

Features and limitations of difference lists

If S1#E1 is a difference list with E1 an unbound variable, we can append another difference list S2#E2 to S1#E1 in constant time:

   declare
   fun {AppendD D1 D2}
      S1#E1=D1
      S2#E2=D2
   in
      E1=S2
      S1#E2
   end

Example:

   local X Y in
      {Browse {AppendD (1|2|3|X)#X (4|5|Y)#Y}}
   end

displays the value (1|2|3|4|5|Y)#Y.

LIMITATIONS:
 - a difference list can be appended to only once
 - difference lists can be used only in special circumstances

REMARK: Difference lists originate from Prolog and logic programming, and are the basis of many advanced Prolog programming techniques.
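The constant-time append can be illustrated outside Oz. A minimal Python sketch, under a stated assumption: Python has no unbound dataflow variables, so here a difference list is encoded as a function that, given a tail, returns the full list (a known functional-programming encoding, not the Oz representation). Appending two such lists is function composition, which copies nothing. All names below are my own.

```python
def empty():
    return lambda tail: tail          # plays the role of X#X

def from_list(xs):
    return lambda tail: xs + tail     # plays the role of (x1|...|xn|X)#X

def append_d(d1, d2):
    return lambda tail: d1(d2(tail))  # O(1): no list is copied here

def to_list(d):
    return d([])                      # close the open tail with "nil"

d = append_d(from_list([1, 2, 3]), from_list([4, 5]))
print(to_list(d))  # [1, 2, 3, 4, 5]
```

Unlike the Oz version, this encoding can be appended to repeatedly, because no variable is bound by the append; the one-shot limitation is specific to the single-assignment store.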

Programming techniques with difference lists
Flattening a nested list: Implementation 1

The flattening of a nested list has all the elements of the nested list, but is no longer nested. We can reason by induction on the syntax (BNF) of nested lists:
 - Flatten of nil is nil.
 - Flatten of X|Xr, where X is a nested list, is Z, where flatten of X is Y, flatten of Xr is Yr, and appending Y and Yr gives Z.
 - Flatten of X|Xr, where X is not a list, is X|Yr, where flatten of Xr is Yr.

   fun {Flatten Xs}
      case Xs
      of nil then nil
      [] X|Xr andthen {IsList X} then
         {Append {Flatten X} {Flatten Xr}}
      [] X|Xr then
         X|{Flatten Xr}
      end
   end

   % this call will display [a b c d e f]
   {Browse {Flatten [[a b] [[c] [d]] nil [e [f]]]}}
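A rough Python analogue of Implementation 1, for readers without a Mozart system (assumptions of this sketch: Python lists stand in for Oz lists, and isinstance(x, list) plays the role of {IsList X}):

```python
def flatten(xs):
    if not xs:                        # nil case
        return []
    head, *rest = xs
    if isinstance(head, list):        # head is itself a nested list
        return flatten(head) + flatten(rest)
    return [head] + flatten(rest)     # head is a plain element

print(flatten([['a', 'b'], [['c'], ['d']], [], ['e', ['f']]]))
# ['a', 'b', 'c', 'd', 'e', 'f']
```

As in the Oz version, the repeated appends make this quadratic in the worst case, which motivates the difference-list implementations that follow.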

Programming techniques with difference lists
Flattening a nested list: Implementation 2 (with difference lists)

 - Flatten of nil is X#X (the empty difference list).
 - Flatten of X|Xr, where X is a nested list, is Y1#Y4, where flatten of X is Y1#Y2, flatten of Xr is Y3#Y4, and equating Y2 and Y3 appends the two difference lists.
 - Flatten of X|Xr, where X is not a list, is (X|Y1)#Y2, where flatten of Xr is Y1#Y2.

   fun {Flatten Xs}
      proc {FlattenD Xs ?Ds}
         case Xs
         of nil then Y in Ds=Y#Y
         [] X|Xr andthen {IsList X} then Y1 Y2 Y4 in
            Ds=Y1#Y4
            {FlattenD X Y1#Y2}
            {FlattenD Xr Y2#Y4}
         [] X|Xr then Y1 Y2 in
            Ds=(X|Y1)#Y2
            {FlattenD Xr Y1#Y2}
         end
      end
      Ys
   in
      {FlattenD Xs Ys#nil}
      Ys
   end

Programming techniques with difference lists
Flattening a nested list: Remarks about Implementation 2

 - It is efficient.
 - The difference list returned by FlattenD is converted into a regular list by binding its second component to nil.
 - FlattenD is written as a proc (instead of a fun) because we need only part of its output argument (the last argument).
 - We can also represent a difference list by two separate arguments, which gives a slightly faster implementation:

   fun {Flatten Xs}
      proc {FlattenD Xs ?S E}
         case Xs
         of nil then S=E
         [] X|Xr andthen {IsList X} then Y2 in
            {FlattenD X S Y2}
            {FlattenD Xr Y2 E}
         [] X|Xr then Y1 in
            S=X|Y1
            {FlattenD Xr Y1 E}
         end
      end
      Ys
   in
      {FlattenD Xs Ys nil}
      Ys
   end

Programming techniques with difference lists
Flattening a nested list: A final improvement

We can write FlattenD as a function, by making the parameter S the output:

   fun {Flatten Xs}
      fun {FlattenD Xs E}
         case Xs
         of nil then E
         [] X|Xr andthen {IsList X} then
            {FlattenD X {FlattenD Xr E}}
         [] X|Xr then
            X|{FlattenD Xr E}
         end
      end
   in
      {FlattenD Xs nil}
   end

REMARK: During the call {FlattenD Xs E}, the parameter E gives the rest of the output, to be produced after the elements of Xs are exhausted.
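The final version's control structure, where the second parameter carries the already-flattened rest of the output, translates directly to Python (a sketch under the same assumptions as before; note that Python's list concatenation copies, so only the recursion shape, not the cost, matches the Oz original):

```python
def flatten(xs):
    def flatten_d(xs, e):
        # e is the rest of the output, produced after the elements of xs
        if not xs:
            return e
        head, rest = xs[0], xs[1:]
        if isinstance(head, list):
            return flatten_d(head, flatten_d(rest, e))
        return [head] + flatten_d(rest, e)
    return flatten_d(xs, [])

print(flatten([['a', 'b'], [['c'], ['d']], [], ['e', ['f']]]))
# ['a', 'b', 'c', 'd', 'e', 'f']
```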

Programming techniques with difference lists
Reversing a list

The naive version with difference lists:
 - Reverse of nil is X#X (the empty difference list).
 - Reverse of X|Xs is Z, where reverse of Xs is Y1#Y2, and appending Y1#Y2 and (X|Y)#Y gives Z.

The last (recursive) case can be rewritten as follows:
 - Reverse of X|Xs is Y1#Y, where reverse of Xs is Y1#Y2, and Y2 is equated with X|Y.

or, even better:
 - Reverse of X|Xs is Y1#Y, where reverse of Xs is Y1#(X|Y).

   fun {Reverse Xs}   % final version of Reverse
      proc {ReverseD Xs ?Y1 Y}
         case Xs
         of nil then Y1=Y
         [] X|Xr then {ReverseD Xr Y1 X|Y}
         end
      end
      Y1
   in
      {ReverseD Xs Y1 nil}
      Y1
   end
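The final ReverseD is an accumulator: the second component of the difference list collects the reversed prefix. The same pattern in Python (a sketch; the parameter y plays the role of the difference-list tail):

```python
def reverse(xs):
    def reverse_d(xs, y):
        # y accumulates the elements seen so far, in reversed order
        if not xs:
            return y
        return reverse_d(xs[1:], [xs[0]] + y)
    return reverse_d(xs, [])

print(reverse([1, 2, 3]))  # [3, 2, 1]
```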

Programming techniques with difference lists
Queues: Implementation 1 (naive)

A queue (FIFO) is a sequence of elements with an insert and a delete operation: elements are inserted at one end of the queue, and deleted from the other.

A naive (and slow) implementation, using a list L to represent the queue content:
 - inserting element X gives the new queue X|L
 - deleting X from the non-empty queue L is done by

   proc {ButLast L ?X ?L1}   % L1 will be the new queue
      case L
      of [Y] then X=Y L1=nil
      [] Y|L2 then L3 in
         L1=Y|L3
         {ButLast L2 X L3}
      end
   end

Programming techniques with difference lists
Implementation 2: Queues with amortized constant-time operations

Main idea: represent the content of the queue by a pair q(F R), where F and R are lists:
 - F represents the front of the queue, and
 - R represents the back of the queue, in reversed form.

NOTE: the queue content is always {Append F {Reverse R}}.

   declare NewQueue Insert Delete IsEmpty
   fun {NewQueue} q(nil nil) end   % create an empty queue

   fun {Check Q}   % move elements from back to front
      case Q of q(nil R) then q({Reverse R} nil) else Q end
   end

   fun {Insert Q X}   % insert X at the back of the queue
      case Q of q(F R) then {Check q(F X|R)} end
   end

   fun {Delete Q ?X}   % delete an element from the front of the queue
      case Q of q(F R) then F1 in F=X|F1 {Check q(F1 R)} end
   end

   fun {IsEmpty Q}   % check if the queue is empty
      case Q of q(F R) then F==nil end
   end
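A Python sketch of Implementation 2 (assumptions of this sketch: tuples (front, back) stand in for the Oz record q(F R), and every operation returns a new queue value, preserving the functional style of the slide):

```python
def new_queue():
    return ([], [])

def check(q):
    # move elements from back to front when the front runs empty
    front, back = q
    return (list(reversed(back)), []) if not front else q

def insert(q, x):
    # insert at the back; the back list is kept in reversed order
    front, back = q
    return check((front, [x] + back))

def delete(q):
    # delete from the front; returns the element and the new queue
    front, back = q
    return front[0], check((front[1:], back))

def is_empty(q):
    return not q[0]

q = insert(insert(new_queue(), 'peter'), 'paul')
x, q = delete(q)
print(x)  # peter
```

Each element is moved from back to front at most once, which is what makes the cost constant in the amortized sense.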

Programming techniques with difference lists
Implementation 3: Queues with worst-case constant-time operations

Main idea: represent the queue by a triple q(N S E), where E is an unbound variable:
 - the content of the queue is given by the difference list S#E
 - N is the number of elements in the queue

The queue operations are defined as follows:

   declare NewQueue Insert Delete IsEmpty
   fun {NewQueue} X in q(0 X X) end   % create an empty queue

   fun {Insert Q X}   % insert X in queue Q
      case Q of q(N S E) then E1 in E=X|E1 q(N+1 S E1) end
   end

   fun {Delete Q ?X}   % delete X from queue Q
      case Q of q(N S E) then S1 in S=X|S1 q(N-1 S1 E) end
   end

   fun {IsEmpty Q}   % check if queue Q is empty
      case Q of q(N S E) then N==0 end
   end

Programming techniques with difference lists
Similarities of the last two implementations (1)

This example works for both implementations:

   declare Q1 Q2 Q3 Q4 Q5 Q6 Q7
   Q1={NewQueue}
   Q2={Insert Q1 peter}
   Q3={Insert Q2 paul}
   local X in Q4={Delete Q3 X} {Browse X} end
   Q5={Insert Q4 mary}
   local X in Q6={Delete Q5 X} {Browse X} end
   local X in Q7={Delete Q6 X} {Browse X} end

For example, if we use implementation 2 (with amortized cost), then:
 - declare creates store variables q1, ..., q7 and makes every Qi refer to qi.
 - The next three statements bind q1 to q(nil nil), q2 to q([peter] nil), and q3 to q([peter] [paul]).
 - Next: X refers to a newly created store variable x1, x1 is bound to peter, and q4 is bound to q([paul] nil).
 - Next: q5 is bound to q([paul] [mary]).
 - Next: X refers to a newly created store variable x2, x2 is bound to paul, and q6 is bound to q([mary] nil).
 - Finally: X refers to a newly created store variable x3, x3 is bound to mary, and q7 is bound to q(nil nil).

Programming techniques with difference lists
Similarities of the last two implementations (2)

   declare Q1 Q2 Q3 Q4 Q5 Q6 Q7
   Q1={NewQueue}
   Q2={Insert Q1 peter}
   Q3={Insert Q2 paul}
   local X in Q4={Delete Q3 X} {Browse X} end
   Q5={Insert Q4 mary}
   local X in Q6={Delete Q5 X} {Browse X} end
   local X in Q7={Delete Q6 X} {Browse X} end

If we use implementation 3 (with difference lists), then:
 - declare creates store variables q1, ..., q7 and makes every Qi refer to qi.
 - The next statement binds q1 to q(0 x x).
 - The next statement binds x to peter|e1, and q2 to q(1 peter|e1 e1).
 - The next statement binds e1 to paul|e2, and q3 to q(2 peter|paul|e2 e2).
 - Next: X refers to a newly created store variable x1, x1 is bound to peter, and q4 is bound to q(1 paul|e2 e2).
 - The next statement binds e2 to mary|e3, and q5 to q(2 paul|mary|e3 e3).
 - Next: X refers to a newly created store variable x2, x2 is bound to paul, and q6 is bound to q(1 mary|e3 e3).
 - Finally: X refers to a newly created store variable x3, x3 is bound to mary, and q7 is bound to q(0 e3 e3).

Programming techniques with difference lists
Differences between the last two implementations

If we try to delete an element from an empty queue Q, by feeding

   declare X Q1
   Q1={Delete Q X}
   {Browse X}

then Oz fails for implementation 2, but works for implementation 3, because:
 - Initially Q refers to a value of the form q(0 x x), which represents an empty queue.
 - Q1={Delete Q X} creates two new store variables, x1 for X and s1; it binds x to x1|s1 and returns the value q(-1 s1 x1|s1) for Q1.
 - The element X deleted from the empty queue refers to the unbound variable x1. x1 will be bound to the first value inserted in queue Q1:

   declare Q2
   Q2={Insert Q1 bill}

binds x1 to bill, and makes Q2 refer to q(0 s1 s1).

Persistent and ephemeral data structures

A data structure is persistent if its internal representation is not affected by the operations performed on it, directly or indirectly, so that several versions of it can be in use at the same time. A data structure that is not persistent is called ephemeral: there can be only one version in use at any time.

Implementation 2 provides a persistent queue. The following example illustrates the fact that we can safely work with multiple references to the same queue:

   declare Q1 Q2 Q3 Q4 Q5 Q6
   Q1={NewQueue}
   Q2={Insert Q1 jon}
   Q3={Insert Q2 paul}
   Q4={Insert Q2 mary}
   local X in Q5={Delete Q3 X} {Browse X} end
   local X in Q6={Delete Q4 X} {Browse X} end

Both Q3 and Q4 are built from Q2. After feeding the first four variable bindings, the environment is E = {Q1 -> q1, Q2 -> q2, Q3 -> q3, Q4 -> q4}, and the store contains:

   q1 = q(nil nil)
   q2 = q([jon] nil)
   q3 = q([jon] [paul])
   q4 = q([jon] [mary])
   q5, q6: unbound

Subsequent operations do not affect the values of the variables q1, q2, q3, q4, identified by Q1, Q2, Q3, Q4.
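The branching in this example can be reproduced with a two-list functional queue in Python (a compact sketch restating the amortized queue; names are my own). Because every operation returns a new value, two versions can branch off the same queue without corrupting each other:

```python
def check(q):
    # move elements from back to front when the front runs empty
    f, r = q
    return (list(reversed(r)), []) if not f else q

def insert(q, x):
    f, r = q
    return check((f, [x] + r))

def delete(q):
    f, r = q
    return f[0], check((f[1:], r))

q1 = ([], [])
q2 = insert(q1, 'jon')
q3 = insert(q2, 'paul')
q4 = insert(q2, 'mary')   # a second version branching off q2
x3, _ = delete(q3)
x4, _ = delete(q4)
print(x3, x4)  # jon jon -- both versions still start with jon
```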

Persistent and ephemeral data structures

Implementation 3 (with difference lists) is faster, because Insert and Delete are constant-time in the worst case. Unfortunately, such queues are ephemeral. For example:

   declare Q1 Q2 Q3 Q4 Q5 Q6
   Q1={NewQueue}
   Q2={Insert Q1 jon}
   Q3={Insert Q2 paul}
   Q4={Insert Q2 mary}

 - declare creates six unbound store variables q1, q2, q3, q4, q5, q6 and adds Qi -> qi to the environment (1 <= i <= 6).
 - Q1={NewQueue} binds q1 to q(0 x x), where x is a new unbound variable.
 - Q2={Insert Q1 jon} binds x to jon|y, and q2 to q(1 jon|y y).
 - Q3={Insert Q2 paul} binds y to paul|z, and q3 to q(2 jon|paul|z z). Side effect: q2 gets bound to q(1 jon|paul|z paul|z), and the assumption that the third component of the queue representation is an unbound variable no longer holds.
 - Q4={Insert Q2 mary} fails, because it tries to unify paul|z with a value of the form mary|e1.

Queues (implementation with difference lists)

How can we work safely with ephemeral queues (implementation 3)? Workaround: define a Fork operation that takes an ephemeral data structure as input and creates two identical copies of it.

Examples:

   proc {ForkD D ?E ?F}
      D1#nil=D
      E1#E0=E {Append D1 E0 E1}
      F1#F0=F {Append D1 F0 F1}
   in skip end

{ForkD D E F} binds E and F to two fresh copies of a difference list D whose last component is an unbound variable. D is consumed: it cannot be used afterward in the same way (why?).

   proc {ForkQ Q ?Q1 ?Q2}   % forks an ephemeral queue (implementation 3)
      q(N S E)=Q
      q(N S1 E1)=Q1
      q(N S2 E2)=Q2
   in
      {ForkD S#E S1#E1 S2#E2}
   end

Working with trees and ordered binary trees

Recursive data structures:

   Tree   ::= leaf | tree(Value Tree1 ... Treen)
   OBTree ::= leaf | tree(OValue Value OBTree1 OBTree2)

EXTRA ASSUMPTIONS FOR OBTree:
 - OValue is a subtype of Value that is totally ordered (e.g., numbers or strings).
 - For each non-leaf node, all keys in the first subtree are less than the node key, and all keys in the second subtree are greater than the node key.

Operations well supported by instances of OBTree:
 - lookup
 - insertion
 - deletion

Operations on ordered binary trees
Lookup and Insert

   fun {Lookup X T}
      case T
      of leaf then notfound
      [] tree(Y V T1 T2) then
         if X<Y then {Lookup X T1}
         elseif X>Y then {Lookup X T2}
         else found(V) end
      end
   end

   fun {Insert X V T}
      case T
      of leaf then tree(X V leaf leaf)
      [] tree(Y W T1 T2) andthen X==Y then tree(X V T1 T2)
      [] tree(Y W T1 T2) andthen X<Y then tree(Y W {Insert X V T1} T2)
      [] tree(Y W T1 T2) andthen X>Y then tree(Y W T1 {Insert X V T2})
      end
   end
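A Python transcription of Lookup and Insert, as a sketch (assumptions: a node is the tuple (key, value, left, right), a leaf is None, and lookup returns the value or None instead of found(V)/notfound). Like the Oz version, insert builds a fresh path and never mutates the input tree:

```python
def lookup(x, t):
    if t is None:
        return None                        # notfound
    y, v, t1, t2 = t
    if x < y:
        return lookup(x, t1)
    if x > y:
        return lookup(x, t2)
    return v                               # found(V)

def insert(x, v, t):
    if t is None:
        return (x, v, None, None)          # new singleton node
    y, w, t1, t2 = t
    if x == y:
        return (x, v, t1, t2)              # replace the stored value
    if x < y:
        return (y, w, insert(x, v, t1), t2)
    return (y, w, t1, insert(x, v, t2))

t = insert(2, 'b', insert(1, 'a', insert(3, 'c', None)))
print(lookup(2, t))  # b
```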

Operations on ordered binary trees
Node deletion

To delete a node Y from an ordered binary tree T, we distinguish two cases:
 1. At least one subtree of T is a leaf (easy case): replace T by its other subtree.
 2. Neither subtree of T is a leaf: to implement this case, we need an auxiliary function {RemoveSmallest T}, which returns the content of the node Yp with the minimum key in tree T, together with T without the node Yp.

Operations on ordered binary trees
RemoveSmallest and Delete

   fun {RemoveSmallest T}
      case T
      of leaf then none
      [] tree(Y V T1 T2) then
         case {RemoveSmallest T1}
         of none then Y#V#T2
         [] Yp#Vp#Tp then Yp#Vp#tree(Y V Tp T2)
         end
      end
   end

   fun {Delete X T}
      case T
      of leaf then leaf
      [] tree(Y W T1 T2) andthen X==Y then
         case {RemoveSmallest T2}
         of none then T1
         [] Yp#Vp#Tp then tree(Yp Vp T1 Tp)
         end
      [] tree(Y W T1 T2) andthen X<Y then tree(Y W {Delete X T1} T2)
      [] tree(Y W T1 T2) andthen X>Y then tree(Y W T1 {Delete X T2})
      end
   end
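The same deletion scheme in Python (a sketch using the tuple representation (key, value, left, right) with None for a leaf; the inorder helper is mine, added to check the result):

```python
def remove_smallest(t):
    # returns (smallest key, its value, tree without that node), or None
    if t is None:
        return None
    y, v, t1, t2 = t
    r = remove_smallest(t1)
    if r is None:
        return (y, v, t2)                  # y itself is the smallest
    yp, vp, tp = r
    return (yp, vp, (y, v, tp, t2))

def delete(x, t):
    if t is None:
        return None
    y, w, t1, t2 = t
    if x == y:
        r = remove_smallest(t2)
        if r is None:
            return t1                      # right subtree empty: reuse left
        yp, vp, tp = r
        return (yp, vp, t1, tp)            # smallest of right subtree moves up
    if x < y:
        return (y, w, delete(x, t1), t2)
    return (y, w, t1, delete(x, t2))

def inorder(t):
    return [] if t is None else inorder(t[2]) + [t[0]] + inorder(t[3])

t = (2, 'b', (1, 'a', None, None), (3, 'c', None, None))
print(inorder(delete(2, t)))  # [1, 3]
```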

Tree traversal

Tree traversal = performing an operation on the nodes of a tree in some well-defined order.

Depth-first traversal:
 - visits first the node itself, then the first subtree, and then the second subtree
 - the simplest form of tree traversal

Example (print all key/value pairs of the tree nodes):

   proc {DFS T}
      case T
      of leaf then skip
      [] tree(Key Val L R) then
         {Browse Key#Val}
         {DFS L}
         {DFS R}
      end
   end

Depth-first tree traversal with result calculation

Calculation of the list of all key/value pairs stored in the nodes of a tree, in the order given by depth-first traversal:

   proc {DFSAccLoop T S1 ?Sn}
      case T
      of leaf then Sn=S1
      [] tree(Key Val L R) then S2 S3 in
         S2=(Key#Val)|S1
         {DFSAccLoop L S2 S3}
         {DFSAccLoop R S3 Sn}
      end
   end
   fun {DFSAcc T} {Reverse {DFSAccLoop T nil $}} end

We can also calculate this list directly in the right order:

   proc {DFSAccLoop2 T S1 ?Sn}
      case T
      of leaf then S1=Sn
      [] tree(Key Val L R) then S2 S3 in
         S1=(Key#Val)|S2
         {DFSAccLoop2 L S2 S3}
         {DFSAccLoop2 R S3 Sn}
      end
   end
   fun {DFSAcc2 T} {DFSAccLoop2 T $ nil} end
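The direct-order accumulating traversal can be sketched in Python (an assumption of this sketch: since Python has no unbound dataflow variables, the "rest of the output" is passed as an already-computed list argument, threaded right-to-left through the recursion; nodes are (key, value, left, right) tuples, leaves are None):

```python
def dfs_acc(t):
    def loop(t, sn):
        # sn is the list produced after this subtree's contribution
        if t is None:
            return sn
        key, val, left, right = t
        return [(key, val)] + loop(left, loop(right, sn))
    return loop(t, [])

t = (4, 'd',
     (2, 'b', (1, 'a', None, None), (3, 'c', None, None)),
     (5, 'e', None, None))
print(dfs_acc(t))
# [(4, 'd'), (2, 'b'), (1, 'a'), (3, 'c'), (5, 'e')]
```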

Breadth-first tree traversal

This traversal first visits all nodes at depth 0, then all nodes at depth 1, and so forth, going one level deeper at a time. At each level, it traverses the nodes from left to right. To implement it, we need a queue to keep track of all the nodes at a given depth.

   proc {BFS T}
      fun {TreeInsert Q T}
         if T\=leaf then {Insert Q T} else Q end
      end
      proc {BFSQueue Q1}
         if {IsEmpty Q1} then skip
         else X Q2 Key Val L R in
            Q2={Delete Q1 X}
            tree(Key Val L R)=X
            {Browse Key#Val}
            {BFSQueue {TreeInsert {TreeInsert Q2 L} R}}
         end
      end
   in
      {BFSQueue {TreeInsert {NewQueue} T}}
   end
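A Python sketch of breadth-first traversal, with the slide's purpose-built functional queue replaced by the standard-library collections.deque, which also gives constant-time insert and delete (nodes are (key, value, left, right) tuples, leaves are None):

```python
from collections import deque

def bfs_keys(t):
    out = []
    q = deque()
    if t is not None:
        q.append(t)          # enqueue the root, if any
    while q:
        key, val, left, right = q.popleft()
        out.append((key, val))
        for child in (left, right):
            if child is not None:
                q.append(child)   # children join the back of the queue
    return out

t = (4, 'd',
     (2, 'b', (1, 'a', None, None), (3, 'c', None, None)),
     (5, 'e', None, None))
print(bfs_keys(t))
# [(4, 'd'), (2, 'b'), (5, 'e'), (1, 'a'), (3, 'c')]
```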

Breadth-first tree traversal with result calculation

Calculation of the list of all key/value pairs stored in the nodes of a tree, in the order given by breadth-first traversal:

   fun {BFSAcc T}
      fun {TreeInsert Q T}
         if T\=leaf then {Insert Q T} else Q end
      end
      proc {BFSQueue Q1 ?S1 Sn}
         if {IsEmpty Q1} then S1=Sn
         else X Q2 Key Val L R S2 in
            Q2={Delete Q1 X}
            tree(Key Val L R)=X
            S1=(Key#Val)|S2
            {BFSQueue {TreeInsert {TreeInsert Q2 L} R} S2 Sn}
         end
      end
   in
      {BFSQueue {TreeInsert {NewQueue} T} $ nil}
   end

References

Peter Van Roy, Seif Haridi: Concepts, Techniques, and Models of Computer Programming. The MIT Press, 2004. General computation models: Chapter 3 (Declarative Programming Techniques), Section 3.4.