MORE RECURSIVE FUNCTIONS


We'll write a few more recursive functions, just to get in the habit a bit more. After all, recursion is the only looping construct that exists in Erlang (except list comprehensions), so it's one of the most important concepts to understand. It's also useful in every other functional programming language you'll try afterwards, so take notes!

The first function we'll write is duplicate/2. This function takes an integer as its first parameter and any other term as its second parameter. It then creates a list containing as many copies of the term as specified by the integer.

Like before, thinking of the base case first is what might help you get going. For duplicate/2, being asked to repeat something 0 times is the most basic thing that can be done: all we have to do is return an empty list, no matter what the term is. Every other case needs to try to get to the base case by calling the function itself. We will also forbid negative values for the integer, because you can't duplicate something -n times:

duplicate(0,_) ->
    [];
duplicate(N,Term) when N > 0 ->
    [Term|duplicate(N-1,Term)].

Once the basic recursive function is found, it becomes easier to transform it into a tail recursive one by moving the list construction into a temporary variable:

tail_duplicate(N,Term) ->
    tail_duplicate(N,Term,[]).

tail_duplicate(0,_,List) ->
    List;
tail_duplicate(N,Term,List) when N > 0 ->
    tail_duplicate(N-1, Term, [Term|List]).

Success! I want to change the subject a little bit here by drawing a parallel between tail recursion and a while loop. Our tail_duplicate/2 function has all the usual parts of a while loop. If we were to imagine a while loop in a fictional language with an Erlang-like syntax, our function could look a bit like this:

function(N, Term) ->
    while N > 0 ->
        List = [Term|List],
        N = N-1
    end,
    List.

Note that all the elements are there in both the fictional language and in Erlang; only their position changes. This demonstrates that a proper tail recursive function is similar to an iterative process, like a while loop.

There's also an interesting property we can 'discover' when we compare recursive and tail recursive functions by writing a reverse/1 function, which will reverse a list of terms. For such a function, the base case is an empty list, for which we have nothing to reverse: we can just return an empty list when that happens. Every other possibility should try to converge to the base case by calling itself, like with duplicate/2. Our function is going to iterate through the list by pattern matching [H|T] and then putting H after the rest of the list:

reverse([]) ->
    [];
reverse([H|T]) ->
    reverse(T)++[H].

On long lists, this will be a true nightmare: not only will we stack up all our append operations, but we will need to traverse the whole list for every single one of these appends, until the last one! For visual readers, the many checks can be represented as:

reverse([1,2,3,4]) = [4]++[3]++[2]++[1]
                   = [4,3]++[2]++[1]
                   = [4,3,2]++[1]
                   = [4,3,2,1]

This is where tail recursion comes to the rescue. Because we will use an accumulator and add a new head to it every time, our list will automatically be reversed. Let's first see the implementation:

tail_reverse(L) -> tail_reverse(L,[]).

tail_reverse([],Acc) -> Acc;
tail_reverse([H|T],Acc) -> tail_reverse(T, [H|Acc]).

If we represent this one in a manner similar to the normal version, we get:

tail_reverse([1,2,3,4]) = tail_reverse([2,3,4], [1])
                        = tail_reverse([3,4], [2,1])
                        = tail_reverse([4], [3,2,1])
                        = tail_reverse([], [4,3,2,1])
                        = [4,3,2,1]

This shows that the number of elements visited to reverse our list is now linear: not only do we avoid growing the stack, we also do our operations in a much more efficient manner!

Another function to implement could be sublist/2, which takes a list L and an integer N, and returns the first N elements of the list. As an example, sublist([1,2,3,4,5,6],3) would return [1,2,3]. Again, the base case is trying to obtain 0 elements from a list. Take care, however, because sublist/2 is a bit different: you've got a second base case for when the list passed in is empty! If we do not check for empty lists, an error would be thrown when calling recursive:sublist([1],2), while we want [1] instead. Once this is defined, the recursive part of the function only has to cycle through the list, keeping elements as it goes, until it hits one of the base cases:

sublist(_,0) -> [];
sublist([],_) -> [];
sublist([H|T],N) when N > 0 ->
    [H|sublist(T,N-1)].

Which can then be transformed to a tail recursive form in the same manner as before:

tail_sublist(L, N) -> tail_sublist(L, N, []).

tail_sublist(_, 0, SubList) -> SubList;
tail_sublist([], _, SubList) -> SubList;
tail_sublist([H|T], N, SubList) when N > 0 ->
    tail_sublist(T, N-1, [H|SubList]).

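To see what the accumulator is doing, we can trace a call by hand, the same way we traced tail_reverse/1 (a sketch of the evaluation steps, not actual shell output):

```
tail_sublist([1,2,3,4,5,6], 3, []) = tail_sublist([2,3,4,5,6], 2, [1])
                                   = tail_sublist([3,4,5,6], 1, [2,1])
                                   = tail_sublist([4,5,6], 0, [3,2,1])
                                   = [3,2,1]
```

Keep an eye on the order of the elements in that final accumulator.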
There's a flaw in this function. A fatal flaw! We use a list as an accumulator in exactly the same manner we did to reverse our list. If you compile this function as is, sublist([1,2,3,4,5,6],3) would not return [1,2,3], but [3,2,1]. The only thing we can do is take the final result and reverse it ourselves. Just change the tail_sublist/2 call and leave all our recursive logic intact:

tail_sublist(L, N) -> reverse(tail_sublist(L, N, [])).

The final result will be ordered correctly. It might seem like reversing our list after a tail recursive call is a waste of time, and you would be partially right (we still save memory doing this). On shorter lists, you might find your code runs faster with normal recursive calls than with tail recursive calls for this reason, but as your data sets grow, reversing the list will be comparatively lighter.

Note: instead of writing your own reverse/1 function, you should use lists:reverse/1. It's been used so much for tail recursive calls that the maintainers and developers of Erlang decided to turn it into a BIF. Your lists can now benefit from extremely fast reversal (thanks to functions written in C), which will make the reversal disadvantage a lot less obvious. The rest of the code in this chapter will make use of our own reversal function, but after that you should not use it ever again.

To push things a bit further, we'll write a zipping function. A zipping function takes two lists of the same length as parameters and joins them into a list of tuples which each hold two terms. Our own zip/2 function will behave this way:

1> recursive:zip([a,b,c],[1,2,3]).
[{a,1},{b,2},{c,3}]

Given we want both our parameters to have the same length, the base case will be zipping two empty lists:

zip([],[]) -> [];
zip([X|Xs],[Y|Ys]) -> [{X,Y}|zip(Xs,Ys)].

However, if you wanted a more lenient zip function, you could decide to have it finish whenever one of the two lists is done. In this scenario, you therefore have two base cases:

lenient_zip([],_) -> [];
lenient_zip(_,[]) -> [];
lenient_zip([X|Xs],[Y|Ys]) -> [{X,Y}|lenient_zip(Xs,Ys)].

Notice that no matter what our base cases are, the recursive part of the function remains the same. I would suggest you try and make your own tail recursive versions of zip/2 and lenient_zip/2, just to make sure you fully understand how to make tail recursive functions: they'll be one of the central concepts of larger applications, where our main loops will be made that way. If you want to check your answers, take a look at my implementation of recursive.erl, more precisely the tail_zip/2 and tail_lenient_zip/3 functions.

Note: tail recursion as seen here is not making the memory grow, because when the virtual machine sees a function calling itself in a tail position (the last expression to be evaluated in a function), it eliminates the current stack frame. This is called tail-call optimisation (TCO), and it is a special case of a more general optimisation named Last Call Optimisation (LCO). LCO is done whenever the last expression to be evaluated in a function body is another function call. When that happens, as with TCO, the Erlang VM avoids storing the stack frame. As such, tail recursion is also possible between multiple functions. As an example, the chain of functions a() -> b(). b() -> c(). c() -> a(). will effectively create an infinite loop that won't run out of memory, as LCO avoids overflowing the stack. This principle, combined with our use of accumulators, is what makes tail recursion useful.
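If you want a hint before checking, here is one possible shape for those tail recursive versions, following exactly the accumulate-then-reverse pattern we used for tail_sublist/2. This is a sketch and may differ from the actual recursive.erl implementation; it also uses lists:reverse/1 rather than our own reverse/1, just to keep the snippet self-contained:

```erlang
%% Tail recursive zip: accumulate the pairs in reverse order,
%% then flip the accumulator once at the end.
tail_zip(Xs, Ys) -> lists:reverse(tail_zip(Xs, Ys, [])).

tail_zip([], [], Acc) -> Acc;
tail_zip([X|Xs], [Y|Ys], Acc) -> tail_zip(Xs, Ys, [{X,Y}|Acc]).

%% Lenient variant: two base cases, stopping as soon as
%% either list runs out.
tail_lenient_zip(Xs, Ys) -> lists:reverse(tail_lenient_zip(Xs, Ys, [])).

tail_lenient_zip([], _, Acc) -> Acc;
tail_lenient_zip(_, [], Acc) -> Acc;
tail_lenient_zip([X|Xs], [Y|Ys], Acc) -> tail_lenient_zip(Xs, Ys, [{X,Y}|Acc]).
```

Note how the recursive clauses are once again identical between the two versions; only the base cases change, exactly as with zip/2 and lenient_zip/2 above.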