Orders of Growth of Processes



Orders of Growth of Processes

Today's topics:
- Resources used by a program to solve a problem of size n: time and space
- Defining order of growth
- Visualizing resource utilization using our model of evaluation
- Relating types of programs to orders of growth

Resources matter. How many seconds in a year? 60 * 60 * 24 * 365, about 3 * 10^7. Lifetime of the universe (since the Big Bang): about 10^10 years. A computer clock runs at about 3 * 10^9 operations/sec. Operations since the Big Bang: 3 * 10^7 * 10^10 * 3 * 10^9, about 10^27. Number of atoms in the universe: about 10^79. Imagine that every atom is a contemporary computer: the total number of operations is about 10^106. Shannon/Knuth's estimate of the number of possible chess games: 10^120. Lloyd's estimate for the quantum-mechanical universe over its lifetime: 10^122 operations on 10^90 bits.

Orders of growth of processes. Suppose n is a parameter that measures the size of a problem, and let R(n) be the amount of resources needed to compute a procedure on a problem of size n. We say R(n) has order of growth Θ(f(n)) if there are constants k1 and k2 such that

k1 * f(n) <= R(n) <= k2 * f(n) for sufficiently large n (n > some k)

Two common resources are space, measured by the number of deferred operations, and time, measured by the number of primitive steps.
(The figure on the slide plots the bounds k1 * f(n) <= R(n) <= k2 * f(n) holding for n > k.) We need to specify what counts as primitive; typically we will use simple arithmetic operations and simple data-structure operations (to be defined next time).

Partial trace for (fact 4):

(define fact
  (lambda (n)
    (if (= n 1)
        1
        (* n (fact (- n 1))))))

(fact 4)
(if (= 4 1) 1 (* 4 (fact (- 4 1))))
(* 4 (fact 3))
(* 4 (if (= 3 1) 1 (* 3 (fact (- 3 1)))))
(* 4 (* 3 (fact 2)))
(* 4 (* 3 (if (= 2 1) 1 (* 2 (fact (- 2 1))))))
(* 4 (* 3 (* 2 (fact 1))))
(* 4 (* 3 (* 2 (if (= 1 1) 1 (* 1 (fact (- 1 1)))))))
(* 4 (* 3 (* 2 1)))
(* 4 (* 3 2))
(* 4 6)
24

Partial trace for (ifact 4):

(define ifact-helper
  (lambda (product count n)
    (if (> count n)
        product
        (ifact-helper (* product count) (+ count 1) n))))

(define ifact
  (lambda (n)
    (ifact-helper 1 1 n)))

(ifact 4)
(ifact-helper 1 1 4)
(if (> 1 4) 1 (ifact-helper (* 1 1) (+ 1 1) 4))
(ifact-helper 1 2 4)
(if (> 2 4) 1 (ifact-helper (* 1 2) (+ 2 1) 4))
(ifact-helper 2 3 4)
(if (> 3 4) 2 (ifact-helper (* 2 3) (+ 3 1) 4))
(ifact-helper 6 4 4)
(if (> 4 4) 6 (ifact-helper (* 6 4) (+ 4 1) 4))
(ifact-helper 24 5 4)
(if (> 5 4) 24 (ifact-helper (* 24 5) (+ 5 1) 4))
24

Examples of orders of growth

(define fact
  (lambda (n)
    (if (= n 1)
        1
        (* n (fact (- n 1))))))

FACT: space Θ(n), linear (n-1 deferred operations); time Θ(n), linear (2(n-1) primitive ops).

(define ifact-helper
  (lambda (product count n)
    (if (> count n)
        product
        (ifact-helper (* product count) (+ count 1) n))))

(define ifact
  (lambda (n)
    (ifact-helper 1 1 n)))

IFACT: space Θ(1), constant; time Θ(n), linear (2n primitive ops).

Computing Fibonacci. Consider the following function:

F(n) = 0                  if n = 0
F(n) = 1                  if n = 1
F(n) = F(n-1) + F(n-2)    otherwise

Fibonacci is a tree recursion:

(define fib
  (lambda (n)
    (cond ((= n 0) 0)
          ((= n 1) 1)
          (else (+ (fib (- n 1))
                   (fib (- n 2)))))))

New expression:

(cond (<predicate1> <consequent> <consequent> ...)
      (<predicate2> <consequent> <consequent> ...)
      (else <consequent> <consequent> ...))

The slide's call tree: (fib 4) calls (fib 3) and (fib 2); (fib 3) calls (fib 2) and (fib 1); each (fib 2) calls (fib 1) and (fib 0).

Orders of growth for Fibonacci. Let t_n be the number of steps that we need to take to solve the case for size n. Then

t_n = t_(n-1) + t_(n-2) >= 2 t_(n-2) >= 4 t_(n-4) >= 8 t_(n-6) >= ... >= 2^(n/2)

So in time we have Θ(2^n) -- exponential. In space, we have one deferred operation for each increment of the argument -- Θ(n) -- linear.

Towers of Hanoi. Three posts, and a set of different-size disks; any stack must be sorted in decreasing size from bottom to top. The goal is to move the disks one at a time, while preserving these conditions, until the entire stack has moved from one post to another.
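The fact-to-ifact transformation above works for Fibonacci too. As a sketch (not from the slides, using hypothetical names fib-iter and ifib): carry the last two Fibonacci numbers along as state variables, just as ifact-helper carries product and count.

```scheme
; Iterative Fibonacci: a and b hold two consecutive Fibonacci numbers,
; and count steps down to 0. Time Theta(n), space Theta(1) --
; compare with the tree-recursive fib's exponential time.
(define (fib-iter a b count)
  (if (= count 0)
      a
      (fib-iter b (+ a b) (- count 1))))

(define (ifib n)
  (fib-iter 0 1 n))

(ifib 10) ; → 55
```

This is the same state-variable idea as the ifact table: each step of the process corresponds to a row holding (a, b, count), and the answer sits in the a column when count reaches 0.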

Towers of Hanoi

(define move-tower
  (lambda (size from to extra)
    (cond ((= size 0) #t)
          (else
           (move-tower (- size 1) from extra to)
           (print-move from to)
           (move-tower (- size 1) extra to from)))))

(define print-move
  (lambda (from to)
    (display "Move top disk from ")
    (display from)
    (display " to ")
    (display to)
    (newline)))

A small Towers of Hanoi problem:

(move-tower 3 1 2 3)
Move top disk from 1 to 2
Move top disk from 1 to 3
Move top disk from 2 to 3
Move top disk from 1 to 2
Move top disk from 3 to 1
Move top disk from 3 to 2
Move top disk from 1 to 2

(move-tower 5 1 2 3) similarly prints a sequence of 31 moves.

Towers of Hanoi is also a tree recursion: the slide's call tree shows Move 4 spawning two Move 3 calls, each Move 3 spawning two Move 2 calls, and so on down to Move 1.

Orders of growth for Towers of Hanoi. Let t_n be the number of steps that we need to take to solve the case for n disks. Then

t_n = 2 t_(n-1) + 1 = 2 (2 t_(n-2) + 1) + 1 = ... = 2^n - 1

So in time we have Θ(2^n) -- exponential. In space, we have one deferred operation for each increment of the stack of disks -- Θ(n) -- linear.

Computing a^b. We want to compute a^b, just using multiplication and addition. Remember our stages: wishful thinking, decomposition, smallest-sized subproblem.

Wishful thinking: assume that the procedure my-expt exists, but only solves smaller versions of the same problem. Decompose the problem into solving a smaller version and using the result: a^b = a * a * ... * a = a * a^(b-1).

(define my-expt
  (lambda (a b)
    (* a (my-expt a (- b 1)))))
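The recurrence t_n = 2 t_(n-1) + 1 can be checked by counting moves instead of printing them; a small sketch (not from the slides, with a hypothetical count-moves procedure):

```scheme
; Count the moves move-tower would make for n disks, without printing.
; Mirrors move-tower's structure: two recursive calls plus one move,
; so count(n) = 2*count(n-1) + 1, whose closed form is 2^n - 1.
(define (count-moves n)
  (if (= n 0)
      0
      (+ (count-moves (- n 1)) 1 (count-moves (- n 1)))))

(count-moves 3) ; → 7
(count-moves 5) ; → 31
```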

Identify the smallest-sized subproblem: a^0 = 1.

(define my-expt
  (lambda (a b)
    (if (= b 0)
        1
        (* a (my-expt a (- b 1))))))

Orders of growth: time linear, space linear.

Are there other ways to decompose this problem? Use the idea of state variables and table evolution.

Iterative algorithm to compute a^b as a table. In this table there is one column for each piece of information used, and one row for each step; the first row handles a^0 cleanly.

product   counter   a
1         b         a
a         b-1       a
a^2       b-2       a
a^3       b-3       a
a^4       b-4       a
...

On each step: product <- product * a, counter <- counter - 1. The last row is the one where counter = 0; the answer is in the product column of the last row.

(define exp-i
  (lambda (a b)
    (exp-i-help 1 b a)))

(define exp-i-help
  (lambda (prod count a)
    (if (= count 0)
        prod
        (exp-i-help (* prod a) (- count 1) a))))

Orders of growth: space constant, time linear.
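As a quick sanity check (not from the slides) that the recursive and iterative decompositions compute the same function:

```scheme
; Recursive version: deferred multiplications, linear space.
(define (my-expt a b)
  (if (= b 0)
      1
      (* a (my-expt a (- b 1)))))

; Iterative version: state variables, constant space.
(define (exp-i-help prod count a)
  (if (= count 0)
      prod
      (exp-i-help (* prod a) (- count 1) a)))

(define (exp-i a b)
  (exp-i-help 1 b a))

(my-expt 3 4) ; → 81
(exp-i 3 4)   ; → 81
```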

Another kind of process. Let's compute a^b just using multiplication and addition:

If b is even, then a^b = (a^2)^(b/2)
If b is odd, then a^b = a * a^(b-1)

Note that here, we reduce the problem in half in one step.

(define fast-exp-1
  (lambda (a b)
    (cond ((= b 1) a)
          ((even? b) (fast-exp-1 (* a a) (/ b 2)))
          (else (* a (fast-exp-1 a (- b 1)))))))

Orders of growth: if n is even, one step reduces to an n/2-sized problem; if n is odd, two steps reduce to an n/2-sized problem. Thus 2k steps reduce to an n/2^k-sized problem. We are done when the problem size is just 1, which implies an order of growth in time of Θ(log n) -- logarithmic. Space is similarly Θ(log n) -- logarithmic.

Another example of different processes: fun with Pascal's triangle. Suppose we want to compute the elements of Pascal's triangle:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1

We need some notation. Let's order the rows, starting with n = 0 for the first row. The nth row then has n+1 elements, also ordered from 0. Let's use P(j, n) to denote the jth element of the nth row. We want to find ways to compute P(j, n) for any n and any j such that 0 <= j <= n.

Pascal's triangle the traditional way. Traditionally, one thinks of Pascal's triangle as being formed by the following informal method:

The first element of a row is 1.
The last element of a row is 1.
To get the second element of a row, add the first and second elements of the previous row.
To get the kth element of a row, add the (k-1)st and kth elements of the previous row.
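fast-exp-1 still leaves a deferred multiplication on each odd step, which is where its logarithmic space comes from. The successive-squaring idea can also be made fully iterative; a sketch (not from the slides, with hypothetical names), keeping the product state * a^b invariant across steps:

```scheme
; Iterative fast exponentiation: at every step, state * a^b is unchanged,
; so when b reaches 0 the state holds the original a^b.
; Time Theta(log b), space Theta(1).
(define (fast-exp-iter a b state)
  (cond ((= b 0) state)
        ((even? b) (fast-exp-iter (* a a) (/ b 2) state))
        (else (fast-exp-iter a (- b 1) (* a state)))))

(define (fast-exp-i a b)
  (fast-exp-iter a b 1))

(fast-exp-i 2 10) ; → 1024
```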

Pascal s triangle the traditional way Here is a procedure that just captures that idea: (cond ((= j 0) ) ((= j n) ) (else (+ (pascal (- j ) (- n )) (pascal j (- n ))))))) Pascal s triangle the traditional way (cond ((= j 0) ) ((= j n) ) (else (+ (pascal (- j ) (- n ) (pascal j (- n ))))))) What kind of process does this generate? Looks a lot like fibonacci There are two recursive calls to the procedure in the general case In fact, this has a time complexity that is exponential and a space complexity that is linear 3/36 32/36 Can we do better? Yes, but we need to do some thinking. Pascal s triangle actually captures the idea of how many different ways there are of choosing objects from a set, where the order of choice doesn t matter. P(0, n) is the number of ways of choosing collections of no objects, which is trivially. P(n, n) is the number of ways of choosing collections of n objects, which is obviously, since there is only one set of n things. P(j, n) is the number of ways of picking sets of j objects from a set of n objects. So what is the number of ways of picking sets of j objects from a set of n objects? Pick the first one there are n possible choices Then pick the second one there are (n-) choices left. Keep going until you have picked j objects n! n( n )...( n j + ) = ( n j)! But the order in which we pick the objects doesn t matter, and there are j! different orders, so we have n! n( n )...( n j + ) = ( n j)! j! j( j )... 33/36 34/36 So here is an easy way to implement this idea: (/ (fact n) (* (fact (- n j)) (fact j))))) What is complexity of this approach? Three different evaluations of fact Each is linear in time and in space So combination takes 3n steps, which is also linear in time; and has at most n deferred operations, which is also linear in space What about computing with a different version of fact? (/ (ifact n) (* (ifact (- n j)) (ifact j))))) What is complexity of this approach? 
Three different evaluations of fact Each is linear in time and constant in space So combination takes 3n steps, which is also linear in time; and has no deferred operations, which is also constant in space 35/36 36/36 6
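As a quick check (not from the slides) that the additive definition and the factorial formula agree. Note the sketch gives fact a base case at 0, so that (fact 0) = 1 when j = 0 or j = n:

```scheme
; Factorial with base case 0, so 0! = 1.
(define (fact n)
  (if (= n 0) 1 (* n (fact (- n 1)))))

; Additive (tree-recursive) Pascal, as on the slides.
(define (pascal-add j n)
  (cond ((= j 0) 1)
        ((= j n) 1)
        (else (+ (pascal-add (- j 1) (- n 1))
                 (pascal-add j (- n 1))))))

; Factorial-based Pascal: n! / ((n-j)! j!).
(define (pascal-fact j n)
  (/ (fact n) (* (fact (- n j)) (fact j))))

(pascal-add 2 4)  ; → 6
(pascal-fact 2 4) ; → 6
```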

Solving the same problem the direct way. Now, why not just do the computation directly?

(define pascal
  (lambda (j n)
    (/ (help n 1 (+ n (- j) 1))
       (help j 1 1))))

(define help
  (lambda (k prod end)
    (if (= k end)
        (* k prod)
        (help (- k 1) (* prod k) end))))

This computes n(n-1)...(n-j+1) / (j(j-1)...1) directly.

So what is the complexity here? help is an iterative procedure, and has constant space and linear time. This version of pascal only uses two calls to help (as opposed to the previous version, which used three calls to ifact). In practice, this means this version uses fewer multiplies than the previous one, but it is still linear in time, and hence has the same order of growth.

So why do these orders of growth matter? The main concern is the general order of growth: exponential is very expensive as the problem size grows. Some clever thinking can sometimes convert an inefficient approach into a more efficient one. In practice, actual performance may improve by considering different variations, even though the overall order of growth stays the same.