
CS18 Integrated Introduction to Computer Science
Fisler, Nelson

Lecture 12: Dynamic Programming Part 1
10:00 AM, Feb 21, 2018

Contents

1 Introduction
2 Fibonacci

Objectives

By the end of these notes, you will know:

- the top-down and bottom-up approaches to dynamic programming

By the end of these notes, you will be able to:

- apply dynamic programming to implement solutions to problems with recursive structure that run faster than a naive recursive implementation

1 Introduction

Dynamic programming (DP) is a technique for solving problems whose solutions can be derived from the solutions to their subproblems. "Dynamic" refers to the idea that what the algorithm computes at each step is based on its earlier computations; "program" refers not to a computer program, but rather to a schedule of activities. DP is not one specific algorithm. It is an algorithmic strategy (like divide-and-conquer is an algorithmic strategy) that is often applied to solving optimization problems: e.g., path planning.

DP is applicable to problems with recursive solutions, specifically when the subproblems into which they decompose overlap. Subproblems overlap when they can be used to solve multiple superproblems. For example, the (n-2)nd Fibonacci number is used to compute both the (n-1)st and the nth Fibonacci numbers. The key insight that makes DP efficient is that overlapping subproblems are not solved and re-solved repeatedly; rather, intermediate solutions are calculated once and then stored, so they can be accessed later as needed.

DP comes in two flavors: top-down and bottom-up. When using the top-down approach, we use recursion as usual, with two straightforward enhancements: (i) before solving a problem, we check our store (i.e., the data structure in which we are storing intermediate solutions) to see whether we have already solved it; and (ii) after solving a problem, we make sure to update our store with the new solution. This storing of intermediate solutions for potential later use (rather than recalculating them as needed) is called memoization.
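In code, the top-down recipe has a recognizable shape. Here is a minimal sketch of it (the names MemoizedSolver, solve, and solveFromSubproblems are hypothetical, and the store is a HashMap rather than any particular data structure from this course):

import java.util.HashMap;
import java.util.Map;

/** A sketch of the top-down recipe; the names here are hypothetical. */
class MemoizedSolver {
    private final Map<Integer, Integer> store = new HashMap<>();

    int solve(int problem) {
        // Enhancement (i): check the store before solving.
        Integer cached = store.get(problem);
        if (cached != null) {
            return cached;
        }
        int solution = solveFromSubproblems(problem);
        // Enhancement (ii): update the store after solving.
        store.put(problem, solution);
        return solution;
    }

    /** Problem-specific recursion; a trivial stand-in so the sketch compiles. */
    int solveFromSubproblems(int problem) {
        return problem <= 0 ? 0 : solve(problem - 1) + 1;
    }
}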

On the other hand, the bottom-up approach is not recursive; it is iterative. Instead of solving larger problems in terms of smaller ones, as a recursive solution would, we first solve the smaller problems, and then use those solutions to build up solutions to larger problems. The solutions to smaller problems are again stored, and the solutions to the bigger problems are calculated by looking up those stored solutions.

2 Fibonacci

To introduce this new topic, we're going to revisit an old (but good) topic: the Fibonacci numbers. Remember how the Fibonacci sequence is defined:

    F(n) = 0,                 if n = 0
           1,                 if n = 1
           F(n-1) + F(n-2),   otherwise    (1)

Take a careful look at this recurrence relation. To compute F(n), we must compute F(n-1) and F(n-2). And then to compute F(n-1) we must compute F(n-2) and F(n-3). So the computations of both F(n) and F(n-1) require that we compute F(n-2). But in our naive implementation of fib in CS 17, we forgot all about F(n-2) after computing F(n-1), and just recomputed it when we needed it for F(n). Pretty silly. And not just silly: expensive!

Without DP:

    fib(4)
    +-- fib(3)
    |   +-- fib(2)
    |   |   +-- fib(1)
    |   |   +-- fib(0)
    |   +-- fib(1)
    +-- fib(2)
        +-- fib(1)
        +-- fib(0)

With memoization:

    fib(4)
    +-- fib(3)*
    |   +-- fib(2)*
    |   |   +-- fib(1)
    |   |   +-- fib(0)
    |   +-- fib(1)
    +-- fib(2)

The calls with asterisks are the only ones that require computation, since all the non-marked calls will have their values stored already. The fib(0) and fib(1) calls on the left branch are base cases, so we don't need to compute their values, and we've already found the value of the fib(2) call on the right branch from the call to fib(2) on the left branch.

Below, we try two different dynamic-programming approaches, one top-down and the other bottom-up, to calculate the nth Fibonacci number. Both of these programs use an array to store solutions to subproblems. As per the Fibonacci recurrence relation, the 0th and 1st elements of the array are initialized to 0 and 1, respectively. Rather than make an array that holds integers, and use a sentinel value to indicate when a subproblem has yet to be computed, we'll use Java's OptionalInt type, which either contains an int or a special empty value. If we call the get method on an OptionalInt that is empty, Java will throw an exception, preventing the silent failures that might arise from using sentinel values. (We'll see other ways to initialize the array in the next lecture.)

Both our top-down and bottom-up implementations implement IFibonacci, an interface for Fibonacci numbers, which declares only one method, fib, which takes a parameter n and returns the nth Fibonacci number.
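The interface itself does not appear in these notes; a minimal declaration consistent with the description above might look like this (a sketch; the actual CS18 support code may differ):

package lec12;

/** An interface for classes that compute Fibonacci numbers. */
public interface IFibonacci {
    /** Returns the nth Fibonacci number. */
    int fib(int n);
}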

Top-Down Fibonacci

Our naive implementation of fib calls itself an exponential number of times, but it only computes n Fibonacci numbers! This redundancy is clearly unnecessary. A better solution would simply remember the answers to the questions it was asked previously and return those answers (in constant time) when those same questions are asked again. This idea is called memoization; it is a technique for improving the run time of an algorithm.

An important step when taking a top-down approach to DP is to figure out what to memoize: i.e., what information to store for later use. In the case of fib, this step is pretty clear: we should memoize solutions to smaller problems. Since smaller problems are characterized by the integers from 0 to n-1, we'll just use an array (of size n+1, so that it can hold the final answer as well). For example, once computed, the 0th entry in such an array would contain 0; the 1st entry would contain 1; ...; the 5th entry would contain 5; the 6th entry would contain 8; and so on.

    index: 0  1  2  3  4  5  6  ...
    value: 0  1  1  2  3  5  8  ...

We implement this top-down approach to DP below. As per the Fibonacci recurrence relation, the 0th and 1st elements of the array are initialized to 0 and 1, respectively. Now, when fib is called on input n, if the nth Fibonacci number has already been memoized (i.e., if fibTable[n] is non-empty), then the memoized value is returned immediately. If it has not, the nth Fibonacci number is calculated (by recursively computing the (n-1)st and (n-2)nd Fibonacci numbers, which are themselves stored in fibTable for later use), stored, and then returned. This DP algorithm runs in time O(n), because fib is never called twice on the same input, and there are only n possible inputs.

package lec12;

import java.util.OptionalInt;

/**
 * A class that computes the Fibonacci numbers using top-down dynamic
 * programming.
 */
public class TDFibonacci implements IFibonacci {
    @Override
    public int fib(int n) {
        if (n == 0) return 0;
        if (n == 1 || n == 2) return 1;
        // Initialize the table; empty means "not yet computed".
        OptionalInt[] fibTable = new OptionalInt[n + 1];
        for (int i = 0; i <= n; i++) {
            fibTable[i] = OptionalInt.empty();
        }
        fibTable[0] = OptionalInt.of(0);
        fibTable[1] = OptionalInt.of(1);
        fibTable[2] = OptionalInt.of(1);
        return fibHelper(n, fibTable);
    }

    int fibHelper(int n, OptionalInt[] fibTable) {
        // Only compute fib(n) if it has not been memoized already.
        if (!fibTable[n].isPresent()) {
            fibTable[n] = OptionalInt.of(fibHelper(n - 1, fibTable)
                    + fibHelper(n - 2, fibTable));
        }
        return fibTable[n].getAsInt();
    }
}
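As a quick sanity check, a class like TDFibonacci can be exercised through the interface; the demo class below is hypothetical, not part of the lecture code:

package lec12;

/** A small, hypothetical demo of the top-down implementation. */
public class FibDemo {
    public static void main(String[] args) {
        IFibonacci fib = new TDFibonacci();
        System.out.println(fib.fib(6));   // prints 8
        System.out.println(fib.fib(10));  // prints 55
    }
}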

Caching is a generalization of memoization. In English, a cache is a hiding place for storing valuables, like jewelry or ammunition. In CS, a cache is not a hiding place, but it is a place for storing valuables: not jewelry or ammunition, but values you expect to need again later and don't want to have to retrieve again from their original source (e.g., a computation, or an image retrieved from a web server). By caching values, you save time precisely because you do not have to retrieve those values again. Because a cache is of finite size, the problem of which values to store in a cache is an important, and well-studied, optimization problem. (Take CSCI 167 for more information about caching strategies.)

Bottom-Up Fibonacci

So far, we have taken a top-down approach: we have used memoization to speed up the original recursive algorithm. What we will now do is take a bottom-up approach. To do so, we will design a brand new algorithm that works, as the name suggests, from the bottom up. This means that we will solve the subproblems first, store their solutions in a table, and, when all the subproblems have been solved, use that table to solve the problem itself.

In the recursive solution with memoization, we created a table and let the recursive algorithm store values in that table in whatever order naturally arose from its sequence of recursive calls. Now, we will pick the order in which values are calculated (and hence stored) more intelligently. That is, we will ask the question: in what order should we fill in the table? In the Fibonacci case, the answer is rather trivial: since the value at each index depends on the availability of the values at the two previous indices, we can simply compute the values in increasing index order, starting from 0. This is a very natural way to compute fib(n): create an array of size n+1, set the 0th element to be 0 and the 1st element to be 1, then compute the 2nd from those, then compute the 3rd from the entries you have already computed, and so on. Here is the corresponding implementation:

package lec12;

import java.util.OptionalInt;

/**
 * A class that computes the Fibonacci numbers using bottom-up dynamic
 * programming.
 */
class BUFibonacci implements IFibonacci {
    @Override
    public int fib(int n) {
        if (n == 0) return 0;
        if (n == 1 || n == 2) return 1;
        // Initialize the table; empty means "not yet computed".
        OptionalInt[] fibTable = new OptionalInt[n + 1];
        for (int i = 0; i <= n; i++) {
            fibTable[i] = OptionalInt.empty();
        }
        fibTable[0] = OptionalInt.of(0);
        fibTable[1] = OptionalInt.of(1);
        fibTable[2] = OptionalInt.of(1);
        // Fill in the table in increasing index order.
        for (int i = 3; i <= n; i++) {
            fibTable[i] = OptionalInt.of(fibTable[i - 1].getAsInt()
                    + fibTable[i - 2].getAsInt());
        }
        return fibTable[n].getAsInt();
    }
}

Computer scientists often trade off time efficiency for space efficiency. Our DP implementations of fib run much faster than a pure and simple recursive implementation would, but they use more space. But if we take a closer look at our bottom-up DP solution to fib, we find that we only actually need the previous two numbers to compute the next number. (Recall fast-fib, the CS 17 program for computing Fibonacci numbers that used an accumulator.) Thus, there is an even more space-efficient implementation that doesn't require an entire array, but only two variables! When taking a bottom-up DP approach to solving a problem, if you find that you don't continually need the solutions to all subproblems, you should feel free to throw away the ones you don't need any more.
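For concreteness, here is what such a two-variable version might look like; this is a sketch, not part of the lecture code (the class name is made up):

package lec12;

/**
 * A sketch (not lecture code) of a space-efficient bottom-up Fibonacci:
 * it keeps only the previous two values instead of a whole table.
 */
class TwoVariableFibonacci implements IFibonacci {
    @Override
    public int fib(int n) {
        if (n == 0) return 0;
        int prev = 0; // F(i-1)
        int curr = 1; // F(i)
        for (int i = 1; i < n; i++) {
            int next = prev + curr; // F(i+1) = F(i) + F(i-1)
            prev = curr;
            curr = next;
        }
        return curr;
    }
}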

At this point, you might be wondering: which approach to DP is better, top-down or bottom-up? The answer is unclear. Theoretically, the top-down approach is always at least as good as the bottom-up approach in terms of its asymptotic time complexity. For example, suppose our recurrence relation were F(n) = F(n-10) + F(n-20). Here, the top-down approach only computes the necessary values, whereas the bottom-up approach computes all values: to compute F(100) under this recurrence, top-down only ever reaches arguments that are multiples of 10, while a naive bottom-up loop would fill in every entry from 0 to 100. But the bottom-up approach can often reveal opportunities for improved space efficiency.

Please let us know if you find any mistakes, inconsistencies, or confusing language in this or any other CS18 document by filling out the anonymous feedback form: http://cs.brown.edu/courses/cs018/feedback.