Dynamic Programming II


Lecture 11: Dynamic Programming II

11.1 Overview

In this lecture we continue our discussion of dynamic programming, focusing on using it for a variety of path-finding problems in graphs. Topics in this lecture include:

  - The Bellman-Ford algorithm for single-source (or single-sink) shortest paths.
  - Matrix-product algorithms for all-pairs shortest paths.
  - The Floyd-Warshall algorithm for all-pairs shortest paths.
  - Dynamic programming for the TSP.

11.2 Introduction

As a reminder of basic terminology: a graph is a set of nodes or vertices, with edges between some of the nodes. We will use V to denote the set of vertices and E to denote the set of edges. If there is an edge between two vertices, we call them neighbors. The degree of a vertex is the number of neighbors it has. Unless otherwise specified, we will not allow self-loops or multi-edges (multiple edges between the same pair of nodes). As is standard when discussing graphs, we will use n = |V| and m = |E|, and we will let V = {1, ..., n}.

The above describes an undirected graph. In a directed graph, each edge has a direction (and, as we said earlier, we will sometimes call the edges in a directed graph arcs). For each node, we can now talk about out-neighbors (and out-degree) and in-neighbors (and in-degree). In a directed graph you may have both an edge from u to v and an edge from v to u.

We will be especially interested here in weighted graphs, where edges have weights (which we will also call costs or lengths). For an edge (u,v) in our graph, let's use len(u,v) to denote its weight. The basic shortest-path problem is as follows:

Definition 11.1 Given a weighted, directed graph G, a start node s, and a destination node t, the s-t shortest path problem is to output the shortest path from s to t. The single-source shortest
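To make the setup concrete, here is one way such a weighted directed graph might be represented in code (a Python sketch; the dict-of-dicts layout and the particular vertices and weights are illustrative assumptions, not from the notes):

```python
# A weighted directed graph: each vertex maps to a dict of its
# out-neighbors, so len(u, v) is graph[u][v] when the arc (u, v) exists.
graph = {
    "s": {"a": 2, "b": 5},
    "a": {"b": 1, "t": 4},
    "b": {"t": 1},
    "t": {},
}

def length(g, u, v):
    """len(u, v) from the notes: the weight of arc (u, v), or None if absent."""
    return g[u].get(v)

def out_degree(g, u):
    """Number of out-neighbors of u."""
    return len(g[u])
```

With this layout, scanning all arcs out of a node takes time proportional to its out-degree, which is exactly the adjacency-list property the Bellman-Ford section below relies on.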

path problem is to find shortest paths from s to every node in G. The (algorithmically equivalent) single-sink shortest path problem is to find shortest paths from every node in G to t.

We will allow negative-weight edges (we'll later see some problems where these come up when using shortest-path algorithms as a subroutine) but will assume no negative-weight cycles (else the shortest path can wrap around such a cycle infinitely often and has length negative infinity). As a shorthand in our drawings, if there is an edge of length l from u to v and also an edge of length l from v to u, we will often just draw them together as a single undirected edge. So, all such edges must have positive weight.

11.3 The Bellman-Ford Algorithm

We will now look at a dynamic programming algorithm called the Bellman-Ford algorithm for the single-sink (or single-source) shortest path problem. We will assume the graph is represented as an adjacency list: an array of size n where entry v is the list of arcs exiting node v (so, given a node, we can find all its neighbors in time proportional to the number of neighbors). Let us develop the algorithm using the following example:

[Figure: an example weighted directed graph with destination node t, five other nodes, and edge lengths such as 60, 30, 40, 10, and 15.]

How can we use dynamic programming to find the shortest path from every node to t? First of all, as usual for dynamic programming, let's just compute the lengths of the shortest paths first; afterwards we can easily reconstruct the paths themselves. The idea for the algorithm is as follows:

1. For each node v, find the length of the shortest path to t that uses at most 1 edge, or write down infinity if there is no such path. This is easy: if v = t we get 0; if (v,t) ∈ E then we get len(v,t); else just put down infinity.

2. Now, suppose for all v we have solved for the length of the shortest path to t that uses i-1 or fewer edges. How can we use this to solve for the shortest path that uses i or fewer edges? Answer: the shortest path from v to t that uses i or fewer edges will first go to some neighbor x of v, and then take the shortest path from x to t that uses i-1 or fewer edges, which we've already solved for! So, we just need to take the min over all neighbors x of v.

3. How far do we need to go? Answer: at most i = n-1 edges, since with no negative-weight cycles there is always a shortest path that is simple, and a simple path uses at most n-1 edges.

Specifically, here is pseudocode for the algorithm. We will use d[v][i] to denote the length of the shortest path from v to t that uses i or fewer edges (if it exists) and infinity otherwise ("d" for "distance"). Also, for convenience we will use a base case of i = 0 rather than i = 1.

Bellman-Ford pseudocode:

    initialize d[v][0] = infinity for v != t; d[t][i] = 0 for all i.

    for i = 1 to n-1:
        for each v != t:
            d[v][i] = min over edges (v,x) ∈ E of ( len(v,x) + d[x][i-1] )
    for each v, output d[v][n-1].

Try it on the above graph! We already argued for the correctness of the algorithm. What about running time? The min operation takes time proportional to the out-degree of v. So, the inner for-loop takes time proportional to the sum of the out-degrees of all the nodes, which is O(m). Therefore, the total time is O(mn).

So far we have only calculated the lengths of the shortest paths; how can we reconstruct the paths themselves? One easy way is (as usual for DP) to work backwards: if you're at vertex v at distance d[v] from t, move to a neighbor x such that d[v] = d[x] + len(v,x). This allows us to reconstruct the path in time O(m+n), which is just a low-order term in the overall running time.

11.4 All-pairs Shortest Paths

Say we want to compute the length of the shortest path between every pair of vertices. This is called the all-pairs shortest path problem. If we use Bellman-Ford for all n possible destinations t, this would take time O(mn^2). We will now see two alternative dynamic programming algorithms for this problem: the first uses the matrix representation of graphs and runs in time O(n^3 log n); the second, called the Floyd-Warshall algorithm, uses a different way of breaking the problem into subproblems and runs in time O(n^3).

11.4.1 All-pairs Shortest Paths via Matrix Products

Given a weighted graph G, define the matrix A = A(G) as follows: A[i,i] = 0 for all i; if there is an edge from i to j, then A[i,j] = len(i,j); otherwise (i != j and there is no edge from i to j), A[i,j] = infinity. I.e., A[i,j] is the length of the shortest path from i to j using 1 or fewer edges.[1]

Now, following the basic dynamic programming idea, can we use this to produce a new matrix B where B[i,j] is the length of the shortest path from i to j using 2 or fewer edges? Answer: yes. B[i,j] = min_k ( A[i,k] + A[k,j] ). Think about why this is true!

I.e., what we want to do is compute a matrix product B = A * A, except we change "*" to "+" and we change "+" to "min" in the definition. In other words, instead of computing the sum of products, we compute the min of sums.

[1] There are multiple ways to define an adjacency matrix for weighted graphs, e.g., what A[i,i] should be and what A[i,j] should be if there is no edge from i to j. The right definition will typically depend on the problem you want to solve.
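As a concrete sketch of this min-plus "product" (Python, using float('inf') for infinity; the 3-vertex example graph and the function name are illustrative assumptions, not from the notes):

```python
INF = float("inf")

def min_plus(X, Y):
    """Min-plus matrix product: result[i][j] = min over k of X[i][k] + Y[k][j],
    i.e., ordinary matrix multiplication with * replaced by + and + by min."""
    n = len(X)
    return [[min(X[i][k] + Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A[i][j] = 0 on the diagonal, len(i, j) where an edge exists, and infinity
# otherwise -- the shortest path from i to j using 1 or fewer edges.
A = [[0,   4,   INF],
     [INF, 0,   1],
     [2,   INF, 0]]

B = min_plus(A, A)  # B[i][j] = shortest path from i to j using 2 or fewer edges
```

For instance, B[0][2] comes out to 5 here, via the two-edge path 0 -> 1 -> 2 of length 4 + 1.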

What if we now want to get the shortest paths that use 4 or fewer edges? To do this, we just need to compute C = B * B (using our new definition of matrix product). I.e., to get from i to j using 4 or fewer edges, we go from i to some intermediate node k using 2 or fewer edges, and then from k to j using 2 or fewer edges. So, to solve for all-pairs shortest paths we just need to keep squaring O(log n) times. Each matrix multiplication takes time O(n^3), so the overall running time is O(n^3 log n).

11.4.2 All-pairs shortest paths via Floyd-Warshall

Here is an algorithm that shaves off the O(log n) factor and runs in time O(n^3). The idea is that instead of increasing the number of edges allowed in the path, we'll increase the set of vertices we allow as intermediate nodes in the path. In other words, starting from the same base case (the shortest path that uses no intermediate nodes), we'll then go on to consider the shortest path that's allowed to use node 1 as an intermediate node, the shortest path that's allowed to use {1, 2} as intermediate nodes, and so on.

    // After each iteration of the outer loop, A[i][j] = length of the
    // shortest i->j path that's allowed to use vertices in the set {1, ..., k}.
    for k = 1 to n do:
        for each i, j do:
            A[i][j] = min( A[i][j], A[i][k] + A[k][j] )

I.e., you either go through node k or you don't. The total time for this algorithm is O(n^3). What's amazing here is how compact and simple the code is!

11.5 TSP

The NP-hard Traveling Salesman Problem (TSP) asks us to find the shortest route that visits all vertices in the graph. To be precise, the TSP asks for the shortest tour that visits all vertices and returns back to the start.[2] Since the problem is NP-hard, we don't expect dynamic programming to give us a polynomial-time algorithm, but perhaps it can still help. Specifically, the naive algorithm for the TSP is just to run brute-force over all n! permutations of the n vertices and to compute the cost of each, choosing the shortest. (We can reduce this to (n-1)! permutations by always using the same start vertex, but we still pay Θ(n) to compute the cost of each permutation, so the overall running time is O(n!).) We're going to use dynamic programming to reduce this to only O(n^2 * 2^n). Any ideas?

As usual, let's first just worry about computing the cost of the optimal solution; we'll later be able to add in some hooks to recover the path. Also, let's work with the shortest-path metric, where we've already computed all-pairs shortest paths (so we can view our graph as a complete graph with the weight between any two vertices representing the shortest path between them). This is convenient since it means a solution is really just a permutation. Finally, let's fix some start vertex s.

[2] Note that under this definition, it doesn't matter which vertex we select as the start. The Traveling Salesman Path Problem is the same thing but does not require returning to the start. Both problems are NP-hard.

Now, here is one fact we can use. Suppose someone told you what the initial part of the solution should look like, and we want to use this to figure out the rest. Then really all we need to know about the initial segment, for the purpose of completing it into a tour, is the set of vertices it visits and the last vertex t it visits; we don't need the whole ordering of the initial segment. This means there are only n * 2^n subproblems (one for every set of vertices and ending vertex t in the set). Furthermore, we can compute the optimal solution to a subproblem in time O(n) given solutions to smaller subproblems: just look at all possible vertices t' in the set that we could have been at right before going to t, and take the one minimizing the cost so far (stored in our lookup table) plus the distance from t' to t.

Here is a top-down way of thinking about it: if we were writing a recursive piece of code to solve this problem, we would have a bit-vector saying which vertices have been visited so far and a variable saying where we are now; we would mark the current location as visited and recursively call on all possible (i.e., not-yet-visited) next places we might go. Naively this would take time Ω(n!). However, by storing the results of our computations, we can use the fact that there are only n * 2^n possible (bit-vector, current-location) pairs, and hence only that many distinct recursive calls are ever made. For each recursive call we do O(n) work inside the call, for a total of O(n^2 * 2^n) time.[3] The last thing is that we need to reconstruct the actual tour, but this is easy to do from the stored computations in the same way as we did for shortest paths.

[3] For more, see http://xkcd.com/399/
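The subproblem scheme just described, a set of visited vertices plus a current endpoint, can be sketched bottom-up as follows (a Python sketch of what is commonly called the Held-Karp dynamic program; using frozensets instead of bit-vectors, fixing vertex 0 as the start s, and returning only the optimal cost are implementation choices, not from the notes):

```python
from itertools import combinations

def tsp_cost(dist):
    """dist[i][j] = shortest-path distance from i to j (a complete graph, per
    the shortest-path metric above). Assumes at least two vertices.
    Returns the cost of the cheapest tour starting and ending at vertex 0."""
    n = len(dist)
    # best[(S, t)] = cost of the cheapest path that starts at 0, visits
    # exactly the vertices in frozenset S (0 not in S), and ends at t in S.
    best = {(frozenset([v]), v): dist[0][v] for v in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for t in S:
                # We were at some t' in S - {t} right before going to t.
                best[(S, t)] = min(best[(S - {t}, u)] + dist[u][t]
                                   for u in S - {t})
    full = frozenset(range(1, n))
    return min(best[(full, t)] + dist[t][0] for t in full)
```

Each of the n * 2^n (set, endpoint) subproblems takes O(n) time to fill in, matching the O(n^2 * 2^n) bound above.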
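Finally, for comparison with the matrix-product method, the Floyd-Warshall loop from Section 11.4.2 is just as short in runnable form (a Python sketch; the input convention, 0 on the diagonal and float('inf') for missing edges, matches Section 11.4.1):

```python
INF = float("inf")  # stands in for "no edge"

def floyd_warshall(A):
    """All-pairs shortest paths. A[i][j] = 0 if i == j, len(i, j) if the edge
    exists, and infinity otherwise. A is updated in place so that afterwards
    A[i][j] is the length of the shortest i -> j path. O(n^3) time."""
    n = len(A)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Either go through node k or don't.
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A
```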