CS/COE 1501

www.cs.pitt.edu/~lipschultz/cs1501/

Greedy Algorithms and Dynamic Programming

Consider the change-making problem: what is the minimum number of coins needed to make up a given value k? If you were working as a cashier, what would your algorithm be to solve this problem?

This is a greedy algorithm: at each step, the algorithm makes the choice that seems best at the moment. Have we seen greedy algorithms already this term?

But wait: nearest neighbor doesn't solve the traveling salesman problem, since it does not produce an optimal result. Does our change-making algorithm solve the change-making problem? For US currency, yes. But what about a currency composed of pennies (1 cent), thrickels (3 cents), and fourters (4 cents)? What denominations would it pick for k = 6? Greedy takes a fourter and two pennies (3 coins), while two thrickels (2 coins) would be optimal; see the sketch below.
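A minimal sketch of the cashier's greedy strategy (the method name and structure are mine, not the slides'): repeatedly take the largest coin that still fits.

// Greedy change making. Assumes denominations are sorted in descending order.
int greedyChange(int[] denoms, int k) {
    int coins = 0;
    for (int d : denoms) {
        while (k >= d) {       // take as many of this coin as fit
            k -= d;
            coins++;
        }
    }
    return coins;              // coins used -- not always the minimum!
}

For example, greedyChange(new int[]{4, 3, 1}, 6) returns 3, even though 2 coins suffice.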

So what changed about the problem? For greedy algorithms to produce optimal results, problems must have two properties:

1. Optimal substructure: an optimal solution to a subproblem leads to an optimal solution to the overall problem.
2. The greedy-choice property: globally optimal solutions can be assembled from locally optimal choices.

Why is optimal substructure not enough?

Finding solutions to all subproblems can be inefficient. Consider computing the Fibonacci sequence:

int fib(int n) {
    if (n == 0) return 0;
    else if (n == 1) return 1;
    else return fib(n - 1) + fib(n - 2);
}

What does the call tree for n = 5 look like?

The call tree for fib(5):

fib(5)
  fib(4)
    fib(3)
      fib(2)
        fib(1)
        fib(0)
      fib(1)
    fib(2)
      fib(1)
      fib(0)
  fib(3)
    fib(2)
      fib(1)
      fib(0)
    fib(1)

How do we improve? Looking at the call tree above: fib(3) is computed twice and fib(2) three times. We should compute each value once, save it, and reuse it.

Memoization:

// Set up the memo table F, assuming n is the target index:
int[] F = new int[n+1];
F[0] = 0;
F[1] = 1;
for (int i = 2; i <= n; i++)
    F[i] = -1;                 // -1 marks "not yet computed"

int dp_fib(int x) {
    if (F[x] == -1)
        F[x] = dp_fib(x-1) + dp_fib(x-2);
    return F[x];
}

Note that we can also do this bottom-up:

int bottomup_fib(int n) {
    if (n == 0) return 0;
    int[] F = new int[n+1];
    F[0] = 0;
    F[1] = 1;
    for (int i = 2; i <= n; i++) {
        F[i] = F[i-1] + F[i-2];
    }
    return F[n];
}

Can we improve this bottom-up approach?
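One improvement (my sketch, not from the slides): each iteration reads only the previous two entries, so the array can be replaced by two variables, cutting space from O(n) to O(1).

int bottomup_fib(int n) {
    if (n == 0) return 0;
    int prev = 0, curr = 1;         // F[i-2] and F[i-1]
    for (int i = 2; i <= n; i++) {
        int next = prev + curr;     // F[i] = F[i-1] + F[i-2]
        prev = curr;
        curr = next;
    }
    return curr;
}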

Where can we apply dynamic programming? Problems with two properties:

1. Optimal substructure: an optimal solution to a subproblem leads to an optimal solution to the overall problem.
2. Overlapping subproblems: naively, we would need to recompute the same subproblems multiple times.

How do these properties contrast with those for greedy algorithms?

The unbounded knapsack problem: given a knapsack that can hold a weight limit L, and a set of n types of items where item i has weight w[i] and value v[i], what is the maximum value we can fit in the knapsack if we assume we have unbounded copies of each item?

K[0] = 0
K[l] = max over all i with w[i] <= l of ( v[i] + K[l - w[i]] ),  for l = 1, ..., L
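A minimal sketch of this recurrence in Java (my translation, not the slides' code; names follow the slide's notation, with the max over items computed by an inner loop):

int unboundedKnapsack(int[] w, int[] v, int L) {
    int[] K = new int[L + 1];          // K[0] = 0; Java zero-initializes
    for (int l = 1; l <= L; l++) {
        for (int i = 0; i < w.length; i++) {
            if (w[i] <= l)             // one more copy of item i fits
                K[l] = Math.max(K[l], v[i] + K[l - w[i]]);
        }
    }
    return K[L];                       // best value for weight limit L
}

Running time is O(nL), and K[l] simply stays 0 when no item fits within limit l.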

The 0/1 knapsack problem: what if we have a finite set of items, each with a weight and a value? There are two choices for each item: it goes in the knapsack, or it is left out of the knapsack.

The 0/1 knapsack problem:

int knapsack(int[] wt, int[] val, int L, int n) {
    if (n == 0 || L == 0)
        return 0;
    if (wt[n-1] > L)          // item n-1 can't fit; skip it
        return knapsack(wt, val, L, n-1);
    else                      // take the better of including or excluding item n-1
        return Math.max(val[n-1] + knapsack(wt, val, L-wt[n-1], n-1),
                        knapsack(wt, val, L, n-1));
}

The 0/1 knapsack dynamic programming solution:

int knapsack(int[] wt, int[] val, int L, int n) {
    int[][] K = new int[n+1][L+1];
    for (int i = 0; i <= n; i++) {
        for (int l = 0; l <= L; l++) {
            if (i == 0 || l == 0)
                K[i][l] = 0;
            else if (wt[i-1] > l)
                K[i][l] = K[i-1][l];
            else
                K[i][l] = Math.max(val[i-1] + K[i-1][l-wt[i-1]], K[i-1][l]);
        }
    }
    return K[n][L];
}

The 0/1 knapsack dynamic programming example
wt = [ 2, 3, 4, 5 ], val = [ 3, 4, 5, 6 ]

i\l   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1
 2
 3
 4

The 0/1 knapsack dynamic programming example
wt = [ 2, 3, 4, 5 ], val = [ 3, 4, 5, 6 ]

i\l   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1    0   0   3   3   3   3
 2
 3
 4

The 0/1 knapsack dynamic programming example
wt = [ 2, 3, 4, 5 ], val = [ 3, 4, 5, 6 ]

i\l   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1    0   0   3   3   3   3
 2    0   0   3   4   4   7
 3
 4

The 0/1 knapsack dynamic programming example
wt = [ 2, 3, 4, 5 ], val = [ 3, 4, 5, 6 ]

i\l   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1    0   0   3   3   3   3
 2    0   0   3   4   4   7
 3    0   0   3   4   5   7
 4

The 0/1 knapsack dynamic programming example
wt = [ 2, 3, 4, 5 ], val = [ 3, 4, 5, 6 ]

i\l   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1    0   0   3   3   3   3
 2    0   0   3   4   4   7
 3    0   0   3   4   5   7
 4    0   0   3   4   5   7

The 0/1 knapsack dynamic programming solution:

int knapsack(int[] wt, int[] val, int L, int n) {
    int[][] K = new int[n+1][L+1];
    for (int i = 0; i <= n; i++) {
        for (int l = 0; l <= L; l++) {
            if (i == 0 || l == 0)
                K[i][l] = 0;
            else if (wt[i-1] > l)
                K[i][l] = K[i-1][l];
            else
                K[i][l] = Math.max(val[i-1] + K[i-1][l-wt[i-1]], K[i-1][l]);
        }
    }
    return K[n][L];
}

How can we also return the items stored in the knapsack?
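One common answer (a sketch of the standard backtrace, not code from the slides): walk backward through the filled table K. Whenever K[i][l] differs from K[i-1][l], item i-1 must have been taken; record it and reduce the remaining capacity by its weight.

import java.util.*;

List<Integer> recoverItems(int[][] K, int[] wt, int L, int n) {
    List<Integer> items = new ArrayList<>();
    int l = L;
    for (int i = n; i > 0; i--) {
        if (K[i][l] != K[i-1][l]) {   // value changed, so item i-1 is in the knapsack
            items.add(i - 1);
            l -= wt[i - 1];
        }
    }
    return items;                     // indices of the chosen items
}

On the example table above, this returns items 1 and 0 (weights 3 and 2), matching the optimal value 7.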

To review, questions to ask in finding dynamic programming solutions:

Does the problem have optimal substructure? Can you solve the problem by splitting it into smaller problems? Can you identify subproblems that build up to a solution?

Does the problem have overlapping subproblems? Where would you find yourself recomputing values? How can you save and reuse those values?

The change-making problem: consider a currency with n different denominations of coins d1, d2, ..., dn. What is the minimum number of coins needed to make up a given value k?
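The slides pose this as a question; one standard DP sketch (my own, mirroring the unbounded knapsack above) fills C[v] = minimum number of coins that make value v:

int minCoins(int[] d, int k) {
    int[] C = new int[k + 1];
    C[0] = 0;                              // zero coins make value 0
    for (int v = 1; v <= k; v++) {
        C[v] = Integer.MAX_VALUE;          // "unreachable" until proven otherwise
        for (int coin : d) {
            if (coin <= v && C[v - coin] != Integer.MAX_VALUE)
                C[v] = Math.min(C[v], 1 + C[v - coin]);
        }
    }
    return C[k];                           // Integer.MAX_VALUE means k is unreachable
}

For the penny/thrickel/fourter currency, minCoins(new int[]{1, 3, 4}, 6) returns 2, improving on the greedy algorithm's answer of 3.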