CMSC 451: Dynamic Programming


CMSC 451: Dynamic Programming. Slides by: Carl Kingsford, Department of Computer Science, University of Maryland, College Park. Based on Sections 6.1 and 6.2 of Algorithm Design by Kleinberg & Tardos.

Dynamic Programming. Our 3rd major algorithm design technique. Similar to divide & conquer: build up the answer from smaller subproblems. More general than simple divide & conquer, and also more powerful. Generally applies to problems where the brute-force algorithm would be exponential.

Weighted Interval Scheduling. Recall the interval scheduling problem we've seen several times: choose as many non-overlapping intervals as possible. What if each interval had a value? Problem (Weighted Interval Scheduling): Given a set of n intervals (s_i, f_i), each with a value v_i, choose a subset S of non-overlapping intervals with Σ_{i in S} v_i maximized.

Example. [Figure: a small instance of overlapping intervals with values, e.g. v_2 = 3 and v_3 = 1.] Note that our simple greedy algorithm for the unweighted case doesn't work: any interval can be made arbitrarily important by giving it a high weight.

Greedy Algorithm for the Unweighted Case:
1. Sort by increasing finishing time.
2. Repeat until no intervals are left:
   (a) Choose the next interval.
   (b) Remove all intervals it overlaps with.
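A minimal runnable sketch of this greedy algorithm in Python (the function name and the (start, finish) input format are my own; I assume intervals that merely touch at an endpoint do not overlap):

    def greedy_unweighted(intervals):
        """intervals: list of (start, finish) pairs.
        Returns a maximum-size set of pairwise non-overlapping intervals."""
        # Step 1: sort by increasing finishing time.
        intervals = sorted(intervals, key=lambda iv: iv[1])
        chosen = []
        last_finish = float('-inf')
        for s, f in intervals:
            # Keep this interval only if it starts after the last chosen one ends;
            # skipping the others implicitly "removes" everything that overlaps.
            if s >= last_finish:
                chosen.append((s, f))
                last_finish = f
        return chosen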

Just Look for the Value of OPT. Suppose for now we're not interested in the actual set of intervals, only in the value of a solution (a.k.a. its cost, score, or objective value). This is typical of DP algorithms: You want to find a solution that optimizes some value. You first focus on just computing what that optimal value would be, e.g., what's the highest value of a set of compatible intervals? You then post-process your answer (and some tables you've created along the way) to get the actual solution.

Another View. Another way to look at Weighted Interval Scheduling: assume the intervals are sorted by finishing time, and represent each interval by its value. The goal is to choose a subset of the values of maximum sum, so that none of the chosen intervals overlap. [Figure: the sequence v_1, v_2, v_3, v_4, ..., v_{n-1}, v_n, with the values of non-chosen intervals crossed out.]

Notation. Definition: p(j) = the largest i < j such that interval i doesn't overlap with interval j (with p(j) = 0 if no such interval exists). [Figure: six intervals sorted by finishing time, with p(1) = 0, p(2) = 0, p(3) = 1, p(4) = 0, p(5) = 3, p(6) = 3.] Intuitively, p(j) is the interval farthest to the right that is compatible with j.
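One straightforward way to compute all the p(j) values, sketched in Python under the same no-overlap-at-endpoints assumption (names are illustrative; this is the quadratic approach matching the O(n^2) bound quoted at the end):

    def compute_p(intervals):
        """intervals: list of (start, finish) pairs, sorted by finishing time.
        Returns p where p[j] is the largest i < j compatible with j, else 0."""
        n = len(intervals)
        p = [0] * (n + 1)                        # p[0] unused; 0 means "none"
        for j in range(1, n + 1):
            s_j = intervals[j - 1][0]
            for i in range(j - 1, 0, -1):        # scan leftward from j-1
                if intervals[i - 1][1] <= s_j:   # interval i ends before j starts
                    p[j] = i                     # first hit is the largest such i
                    break
        return p

    # compute_p([(0,3), (1,5), (4,6), (2,9)]) -> [0, 0, 0, 1, 0]

Since the intervals are sorted by finishing time, a binary search over the finish times (e.g., with Python's bisect module) would bring this down to O(n log n).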

What does an optimal solution look like? Let OPT be an optimal solution, and let n be the last interval. Does OPT contain interval n? Yes: OPT = v_n + (optimal solution on {1, ..., p(n)}). No: OPT = optimal solution on {1, ..., n-1}.

Generalize. Definition: OPT(j) = the value of the optimal solution considering only intervals 1, ..., j. Then OPT(0) = 0, and for j > 0:

    OPT(j) = max( v_j + OPT(p(j)),    (j in the optimal solution)
                  OPT(j-1) )          (j not in the solution)

This kind of recurrence relation is very typical of dynamic programming.

Slow Implementation. Implementing the recurrence directly:

    WeightedIntSched(j):
        If j = 0: Return 0
        Else:
            Return max( v[j] + WeightedIntSched(p[j]),
                        WeightedIntSched(j-1) )

Unfortunately, this is exponential time!

Why is this exponential time? Consider a set of intervals in which p(j) = j - 2 for all j ≥ 3. [Figure: the recursion tree rooted at n, with children n-1 and n-2, their children n-2, n-3, n-3, n-4, and so on.] What's the shortest path from the root to a leaf? It has length n/2, so the total number of nodes is at least 2^(n/2). Each node does constant work, giving Ω(2^(n/2)) time.
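To see the blow-up concretely, here is a small Python experiment (my own illustration, not from the slides) that counts the calls made by the naive recursion when p(j) = j - 2:

    def count_calls(n):
        """Number of calls made by the naive recursion when p(j) = j - 2."""
        calls = 0
        def rec(j):
            nonlocal calls
            calls += 1
            if j > 0:
                rec(max(j - 2, 0))   # the "take j" branch: OPT(p(j))
                rec(j - 1)           # the "skip j" branch: OPT(j-1)
        rec(n)
        return calls

    # count_calls(10) == 287 and count_calls(20) == 35421:
    # Fibonacci-like growth, i.e. exponential in n.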

Memoize. Problem: we repeatedly solve the same subproblems. Solution: save the answer for each subproblem as you compute it. When you compute OPT(j), save the value in a global array M.

Memoized Code.

    MemoizedIntSched(j):
        If j = 0: Return 0
        Else If M[j] is not empty: Return M[j]
        Else:
            M[j] = max( v[j] + MemoizedIntSched(p[j]),
                        MemoizedIntSched(j-1) )
            Return M[j]

We fill in one array entry for every two calls to MemoizedIntSched, so the running time is O(n).
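The same memoized recursion as runnable Python (a sketch; v and p are 1-indexed lists with v[0] and p[0] unused, and M[j] = None plays the role of "M[j] is empty"):

    def memoized_int_sched(n, v, p):
        """Value of the optimal schedule on intervals 1..n.
        v[j] = value of interval j; p[j] = last compatible interval before j."""
        M = [None] * (n + 1)
        M[0] = 0
        def opt(j):
            if M[j] is None:                     # compute each entry only once
                M[j] = max(v[j] + opt(p[j]), opt(j - 1))
            return M[j]
        return opt(n)

    # e.g. with v = [0, 2, 4, 4, 7] and p = [0, 0, 0, 1, 0]:
    # memoized_int_sched(4, v, p) == 7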

Easier Algorithm. When we compute M[j], we only need the values M[k] for k < j:

    ForwardIntSched(n):
        M[0] = 0
        for j = 1, ..., n:
            M[j] = max( v[j] + M[p(j)], M[j-1] )

Main idea of dynamic programming: solve the subproblems in an order that guarantees that when you need an answer, it's already been computed.
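The iterative version in Python, using the same small made-up instance from the compute_p comment above (values 2, 4, 4, 7 for the four intervals) as a usage example:

    def forward_int_sched(n, v, p):
        """Iterative DP: fill M[0..n] left to right, so M[j] = OPT(j)."""
        M = [0] * (n + 1)
        for j in range(1, n + 1):
            M[j] = max(v[j] + M[p[j]], M[j - 1])
        return M

    v = [0, 2, 4, 4, 7]   # v[0] is an unused placeholder
    p = [0, 0, 0, 1, 0]
    print(forward_int_sched(4, v, p))   # [0, 2, 4, 6, 7]

Note how a single heavy interval (v_4 = 7) beats the best compatible pair (v_1 + v_3 = 6), echoing why the unweighted greedy fails here.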

Example. [Figure: animation filling in the array M left to right on a small instance; at each step j, the two candidates v_j + M[p(j)] and M[j-1] are compared, and the larger becomes M[j].]

General DP Principles:
1. The optimal value of the original problem can be computed easily from some subproblems. (Here: OPT(j) is the max over two subproblems.)
2. There are only a polynomial number of subproblems. (Here: the prefixes {1, ..., j} for j = 1, ..., n.)
3. There is a natural ordering of the subproblems from smallest to largest such that you can obtain the solution for a subproblem by looking only at smaller subproblems. (Here: {1, 2, 3} is smaller than {1, 2, 3, 4}.)

Getting the actual solution. We now have an algorithm to find the value of OPT. How do we get the actual choice of intervals? Interval j is in the optimal solution for the subproblem on intervals {1, ..., j} only if v_j + OPT(p(j)) ≥ OPT(j-1). So interval n is in the optimal solution only if v[n] + M[p[n]] ≥ M[n-1]. After deciding whether n is in the solution, we can recurse on the relevant subproblem: either {1, ..., p(n)} or {1, ..., n-1}.

Example. [Figure: animation tracing back through the filled array M: at each j, compare v_j + M[p(j)] with M[j-1] to decide whether j was taken, then jump to p(j) or j-1 accordingly.]

Code.

    BacktrackForSolution(M, j):
        If j > 0:
            If v[j] + M[p[j]] >= M[j-1]:       // find the winner
                Output j                        // j is in the soln
                BacktrackForSolution(M, p[j])
            Else:
                BacktrackForSolution(M, j-1)
            EndIf
        EndIf
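The same backtracking in Python, written iteratively rather than recursively (a sketch; it consumes the arrays from the usage example above):

    def backtrack_for_solution(M, v, p, n):
        """Recover the chosen intervals from the filled DP table M."""
        chosen = []
        j = n
        while j > 0:
            if v[j] + M[p[j]] >= M[j - 1]:   # taking j is at least as good
                chosen.append(j)             # j is in the solution
                j = p[j]                     # continue on {1, ..., p(j)}
            else:
                j = j - 1                    # continue on {1, ..., j-1}
        return chosen

    # Continuing the instance above, where M == [0, 2, 4, 6, 7]:
    # backtrack_for_solution(M, v, p, 4) -> [4]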

Running Time:
- Time to sort by finishing time: O(n log n)
- Time to compute p(j) for all j: O(n^2)
- Time to fill in the M array: O(n)
- Time to backtrack to find the solution: O(n)