Data Structures (CS 1520) Lecture 28 Name:


Traveling Salesperson Problem (TSP) -- Find an optimal (i.e., minimum length) tour when at least one exists.

A tour (or Hamiltonian circuit) is a path from a vertex back to itself that passes through each of the other vertices exactly once. (Since a tour visits every vertex, it does not matter where you start, so we will generally start at the same vertex.)

What are the lengths of the following tours?
a) [ , , , , , ]    ___ + ___ + ___ + ___ + ___ = ___

b) List another tour starting at the same vertex and give its length.
   [ , , , , , ]    ___ + ___ + 9 + ___ + ___ = ___    (others possible too)

c) For a graph with "n" vertices, one possible approach to solving TSP would be to brute-force generate all possible tours to find the minimum length tour. "Complete" the following decision tree to determine the number of possible tours:

   level 1:  (n-1) nodes at this level
   level 2:  (n-1)*(n-2) nodes at this level
   level 3:  (n-1)*(n-2)*(n-3) nodes at this level
   ...
   (n-1)! branches in the whole brute-force decision tree

Unfortunately, TSP is an NP-hard problem, i.e., it has no known polynomial-time algorithm, and O((n-1)!) is very bad!!!
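To make part (c) concrete, here is a minimal brute-force sketch (my own illustration, not code from the lecture; the matrix name dist and the function name brute_force_tsp are assumptions) that fixes vertex 0 as the starting vertex and tries all (n-1)! orderings of the remaining vertices:

from itertools import permutations

def brute_force_tsp(dist):
    # dist is an n x n matrix of edge weights: dist[i][j] = length of edge i-j.
    # Returns (best_length, best_tour) over all (n-1)! tours starting/ending at vertex 0.
    n = len(dist)
    best_length, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):        # vertex 0 is fixed as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

Even for modest n the (n-1)! factor makes this infeasible, which motivates the pruning techniques on the following pages.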

Handling "Hard" Problems: For many optimization problems (e.g., TSP, knapsack, job-scheduling), the best known algorithms have run-times that grow exponentially or worse. Thus, you could wait centuries for the solution of all but the smallest problems!

Ways to handle these "hard" problems:
- Find the best (or a good) solution "quickly" to avoid considering the vast majority of the worse solutions, e.g., backtracking and best-first search with branch-and-bound.
- See if a restricted version of the problem that meets your needs has a tractable (polynomial-time) solution, e.g., the TSP restricted to graphs satisfying the triangle inequality, or the fractional knapsack problem.
- Use an approximation algorithm to find a good, but not necessarily optimal, solution.

Backtracking general idea (recall the coin-change problem from earlier lectures):
- Search the "state-space tree" using depth-first search to find a suboptimal solution quickly.
- Use the best solution found so far to prune solutions that are not "promising," i.e., that cannot lead to a better solution than one already found.
- The goal is to prune enough of the state-space tree (which is exponential in size) that the optimal solution can be found in a reasonable amount of time. However, in the worst case, the algorithm is still exponential.

My simple backtracking solution for the coin-change problem without pruning:

def recmc(change, coinvaluelist):
    global backtrackingnodes
    backtrackingnodes += 1            # count every node of the state-space tree visited
    mincoins = change
    if change in coinvaluelist:
        return 1
    else:
        for i in coinvaluelist:
            if i <= change:
                numcoins = 1 + recmc(change - i, coinvaluelist)
                if numcoins < mincoins:
                    mincoins = numcoins
    return mincoins

Results of running this code:
    Change Amount: ___    Coin types: [ , , , ]
    Run-time: ___ seconds
    Fewest number of coins: ___
    Number of Backtracking Nodes: ___

Consider the output of running the backtracking code with pruning twice with a change amount of ___ cents:

    Change Amount: ___    Coin types: [ , , , ]
    Run-time: ___ seconds
    Fewest number of coins: ___
    The number of each type of coin is:
        number of ___-cent coins: ___
        number of ___-cent coins: ___
        number of ___-cent coins: ___
        number of ___-cent coins: ___
    Number of Backtracking Nodes: ___

    Change Amount: ___    Coin types: [ , , , ]
    Run-time: ___ seconds
    Fewest number of coins: ___
    The number of each type of coin is:
        number of ___-cent coins: ___
        number of ___-cent coins: ___
        number of ___-cent coins: ___
        number of ___-cent coins: ___
    Number of Backtracking Nodes: ___

a) With the coin types sorted in ascending order, what is the first solution found?
   An all-pennies solution: ___ pennies, i.e., ___ coins.

b) How useful is the solution found in (a) for pruning?  Not useful at all!

c) With the coin types sorted in descending order, what is the first solution found?
   A ___-coin solution, which also is optimal.

d) How useful is the solution found in (c) for pruning?
   Very useful, since we never need to pursue a branch of the tree past ___ coins.
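The pruned version whose output is shown above is not listed here; the following is only a sketch of what that pruning could look like (my own code and parameter names, not the lecture's program). It threads the number of coins already used and the best complete solution found so far through the recursion, and abandons any branch that cannot beat the best:

def recmc_pruned(change, coinvaluelist, coinsused=0, best=float("inf")):
    # Prune: finishing this branch needs at least one more coin, so it
    # cannot beat the best complete solution already found.
    if coinsused + 1 >= best:
        return best
    if change in coinvaluelist:               # one coin finishes this branch
        return coinsused + 1
    for coin in coinvaluelist:                # the caller controls the order coins are tried
        if coin <= change:
            best = min(best, recmc_pruned(change - coin, coinvaluelist,
                                          coinsused + 1, best))
    return best

For illustration only (the worksheet's actual amounts were not preserved): with US denominations, recmc_pruned(63, [25, 10, 5, 1]) finds an optimal 6-coin solution (25+25+10+1+1+1) on its very first descent, so every later branch is cut off once five coins have been used, while the ascending order [1, 5, 10, 25] finds a much worse first solution and prunes far less, which is the point of parts (a) through (d).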

e) For the coin-change problem, backtracking is not the best problem-solving technique. What technique was better?
   Dynamic programming is much better.

a) For the TSP problem, why is backtracking the best problem-solving technique?
   Even the dynamic-programming solutions are exponential, O(n^2 2^n), where n is the number of cities.

b) To prune a node in the search tree, we need to be certain that it cannot lead to the best solution. How can we calculate a bound on the best solution possible from a node (e.g., a node whose partial tour is [ , , ])?

   [Figure: a fragment of the state-space tree showing the root, its children, the partial tour [ , , ], and one complete tour whose length is the "best so far".]

   To complete the tour from the path [ , , ], the current end of the path and each remaining vertex must be "left" along some edge going someplace reasonable:
   - The current end of the path must go to one of the unvisited vertices; it cannot go back to the start vertex yet, since the tour must include all vertices before returning to the start.
   - Each unvisited vertex can go to another unvisited vertex, or to the start vertex if it happens to be the last vertex on the tour before returning to the start.

   A bound on the best we could do to complete the tour is therefore the length of the path so far plus, for each vertex that still must be left, the minimum-weight edge leaving it that goes someplace reasonable:

   bound = (path length so far) + (min edge leaving the end of the path) + (min edge leaving each unvisited vertex)
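A sketch of that bound computation in code (my own illustration; the adjacency-matrix name dist and the function name partial_tour_bound are assumptions, with the start vertex taken to be the first entry of the partial tour):

def partial_tour_bound(dist, partial):
    # Lower bound on any complete tour that extends the partial tour.
    # dist    -- n x n matrix of edge weights (dist[i][j] = weight of edge i-j)
    # partial -- list of vertices visited so far; partial[0] is the start vertex
    n = len(dist)
    start, end = partial[0], partial[-1]
    remaining = [v for v in range(n) if v not in partial]

    # Length of the path built so far.
    bound = sum(dist[partial[i]][partial[i + 1]] for i in range(len(partial) - 1))

    # The end of the path must next go to some unvisited vertex
    # (it cannot return to the start while vertices remain unvisited).
    if remaining:
        bound += min(dist[end][v] for v in remaining)
    else:
        bound += dist[end][start]     # tour is complete except for the return edge

    # Each unvisited vertex must be left toward another unvisited vertex,
    # or toward the start if it is the last stop before closing the tour.
    for v in remaining:
        choices = [w for w in remaining if w != v] + [start]
        bound += min(dist[v][w] for w in choices)

    return bound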

To prune a node in the search tree, we need to be certain that it cannot lead to the best solution. We can calculate a bound on the best solution possible from a node (e.g., a node whose partial tour is [ , , ]) by summing the length of the path so far with the minimum edges leaving the remaining vertices going someplace reasonable.

Complete the backtracking state-space tree with pruning:

[Figure: the backtracking state-space tree. Each node shows a partial tour and its bound (path length so far plus the minimum edge weight leaving each remaining vertex). A branch is marked "prune" when its bound exceeds the best complete tour found so far; each complete tour that improves on the best updates "best so far".]
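A compact depth-first backtracking sketch of the search this tree illustrates (my own code, reusing the hypothetical partial_tour_bound helper above):

def tsp_backtrack(dist):
    # Depth-first backtracking TSP with bound-based pruning.
    # Starts and ends at vertex 0; returns (best_length, best_tour).
    n = len(dist)
    best = {"length": float("inf"), "tour": None}

    def dfs(partial, length):
        if len(partial) == n:                           # every vertex placed: close the tour
            total = length + dist[partial[-1]][partial[0]]
            if total < best["length"]:
                best["length"], best["tour"] = total, partial + [partial[0]]
            return
        for v in range(n):
            if v in partial:
                continue
            child = partial + [v]
            # Prune: this child's bound already meets or exceeds the best tour found so far.
            if partial_tour_bound(dist, child) >= best["length"]:
                continue
            dfs(child, length + dist[partial[-1]][v])

    dfs([0], 0)
    return best["length"], best["tour"]

Updating best["length"] the moment a complete tour is found is what makes the bound tests effective on the rest of the depth-first search.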

In the Best-First Search with Branch-and-Bound approach, we:
- do not limit ourselves to any particular search pattern in the state-space tree,
- calculate a "bound" estimate for each node that indicates the "best" possible solution that could be obtained from any node in the subtree rooted at that node, i.e., how "promising" following that node might be, and
- expand the most promising ("best") node first by visiting its children.

a) What type of data structure would we use to find the most promising node to expand next?
   A min heap, keyed on each node's bound.

b) Complete the best-first search with branch-and-bound state-space tree with pruning. Indicate the order in which nodes are expanded.

[Figure: the best-first branch-and-bound state-space tree. Each node shows a partial tour and its bound (path length so far plus the minimum edge weight leaving each remaining vertex). The node with the smallest bound is expanded next, complete tours update "best so far", and any node whose bound is no better than the best so far is pruned.]
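For comparison with the backtracking sketch, here is a best-first branch-and-bound sketch (my own code) that uses Python's heapq module as the min heap from part (a) and again reuses the hypothetical partial_tour_bound helper:

import heapq

def tsp_best_first(dist):
    # Best-first branch-and-bound TSP: always expand the node with the smallest bound.
    # Starts and ends at vertex 0; returns (best_length, best_tour).
    n = len(dist)
    best_length, best_tour = float("inf"), None

    start = [0]
    heap = [(partial_tour_bound(dist, start), start, 0)]   # (bound, partial tour, path length)
    while heap:
        bound, partial, length = heapq.heappop(heap)
        if bound >= best_length:          # cannot beat the best complete tour found so far
            continue
        if len(partial) == n:             # every vertex placed: close the tour
            total = length + dist[partial[-1]][partial[0]]
            if total < best_length:
                best_length, best_tour = total, partial + [partial[0]]
            continue
        for v in range(n):
            if v in partial:
                continue
            child = partial + [v]
            child_bound = partial_tour_bound(dist, child)
            if child_bound < best_length:                  # only enqueue promising children
                heapq.heappush(heap, (child_bound, child, length + dist[partial[-1]][v]))

    return best_length, best_tour

Because nodes come off the heap in order of their bounds, no node whose bound strictly exceeds the optimal tour length is ever expanded, which is the payoff over plain depth-first backtracking.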