Iterative deepening multiobjective A*

S. Harikumar, Shashi Kumar *

Department of Computer Science and Engineering, Indian Institute of Technology, Delhi, New Delhi, India

Received 19 January 1995; revised 17 January 1996
Communicated by M.J. Atallah

Keywords: Algorithms; Combinatorial problems; Multiobjective search; Optimization; Analysis of algorithms

* Corresponding author.

1. Introduction

Many real-world optimization problems involve multiple objectives which are often conflicting. When conventional heuristic search algorithms such as A* and IDA* are used for solving such problems, the problems have to be modeled as simple cost minimization or maximization problems. The task of modeling such problems using a single-valued criterion has often proved difficult [6]. The problems involved in accurately and confidently determining a scalar-valued criterion on which to base the selection of a most preferred alternative have led to the development of the multiobjective approach to alternative selection [7]. In [7], Stewart and White have presented a multiobjective generalization of the popular A* algorithm, MOA*, which uses heuristic-based best-first search techniques to generate all non-dominated solutions. Like A*, MOA* also has exponential space complexity. Depth-first search techniques use linear space, but take too much time and do not lead to an admissible algorithm. A depth-first version of A*, called iterative deepening A* (IDA*) [2], takes linear space and has been shown to be admissible. Under certain conditions IDA* has also been shown to be optimal [4]; that is, the number of nodes expanded by IDA* is of the same order as for A*. In this paper, we present a linear-space multiobjective generalization of the iterative deepening algorithm, called IDMOA*, and prove its correctness, completeness and optimality with respect to MOA*. We have assumed that all objectives are formulated in terms of some quantity to be minimized.
Maximization problems can be handled with simple modifications. We define dominance in the same way as in [7].

Definition (Dominance). Given a set Y of m-vectors, we define a relation R ⊆ Y × Y as follows: (y, y') ∈ R ⟺ y_i ≤ y'_i for all i = 1, ..., m and y ≠ y'. We say that y dominates y' (in Y) if (y, y') ∈ R. An element y ∈ Y is said to be non-dominated (in Y) if there does not exist another element y' ∈ Y such that y' dominates y (in Y).

We illustrate the concept of dominance in the context of the multiobjective traveling salesperson problem (TSP), in which we have to find the tours having non-dominated costs, given a city graph with each edge having two weights as shown in Fig. 1.
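The dominance test in the definition above translates directly into code. The following is a minimal sketch (the function names are ours, not the paper's), assuming cost vectors are tuples of numbers and all objectives are to be minimized:

```python
def dominates(y, y_prime):
    """True if y dominates y_prime: y is no worse in every objective
    and the two vectors differ (so y is strictly better somewhere)."""
    return all(a <= b for a, b in zip(y, y_prime)) and y != y_prime

def non_dominated(Y):
    """The elements of Y that no other element of Y dominates."""
    return [y for y in Y if not any(dominates(z, y) for z in Y)]
```

For the tour costs below, (15, 10), (18, 9) and (14, 12) are mutually non-dominated, while a hypothetical tour of cost (16, 11) would be dominated by (15, 10).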

S. Harikumar, S. Kumar / Information Processing Letters 58 (1996) 11-15

Fig. 1. Multiobjective TSP.

The non-dominated tours are:
(1) A → G → B → C → D → E → F: Cost = (15, 10).
(2) A → G → B → C → F → E → D: Cost = (18, 9).
(3) A → G → F → E → D → C → B: Cost = (14, 12).

The following notation will be used to describe the algorithm:
m: The number of objectives.
SOLUTION: The current set of all non-dominated solutions.
Threshold_i: Maximum value of the threshold used for the ith objective.
S_i: The set of non-dominated solutions obtained after fixing the threshold for the ith objective.
s(w_1, w_2, ..., w_m): Cost vector associated with node s in the search graph, or with solution s.
s(i): Cost component of node s corresponding to the ith objective.

2. The basic idea

As given in [2], IDA* proceeds by adjusting a threshold value and doing a depth-first search until the evaluation function of a node exceeds the threshold. The threshold is modified (increased) until we get a solution. In the multiobjective scenario, since we have multiple objectives, we need a threshold vector instead of a scalar value. If we have m objectives, the threshold vector may be represented by (t_1, t_2, ..., t_m), where t_i is the threshold for the ith objective during an iteration.

We illustrate the basic idea of the algorithm by considering a problem with two objectives. In the IDMOA* algorithm presented here, we first apply a threshold for the first objective and, using iterative deepening [2], get a non-dominated solution which has the least value for the first objective. We use the value of the second objective for this solution as the maximum threshold for that objective and find more non-dominated solutions using iterative deepening search. This can be generalized to problems with more than two objectives. After applying a threshold for the ith objective (i ≥ 1), we use max(s(i+1); ∀s ∈ SOLUTION) as the maximum threshold for the (i+1)th objective.
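The two-phase threshold scheme just described can be sketched in code. This is a simplified illustration under stated assumptions, not the paper's exact algorithm: it assumes a finite search tree with non-negative integer edge-cost vectors and at least one reachable goal, uses a zero heuristic (so the evaluation function is just the accumulated cost, which is trivially monotone), and increments each threshold by 1 rather than by a heuristically chosen minimum step. All function names are ours.

```python
def dominates(y, z):
    """True if cost vector y dominates z (minimization, y != z)."""
    return all(a <= b for a, b in zip(y, z)) and y != z

def idmoa(start, successors, is_goal, edge_cost, m):
    """Sketch of IDMOA*: deepen on objective 0 until the first solution
    appears, then bound each later objective by the worst value of that
    objective among the solutions found so far."""
    solutions = []  # SOLUTION: non-dominated goal cost vectors

    def dfsearch(node, g, obj, threshold):
        if g[obj] > threshold:                       # over threshold: prune
            return
        if any(dominates(s, tuple(g)) for s in solutions):
            return                                   # dominated node: prune
        if is_goal(node):
            if tuple(g) not in solutions:
                solutions.append(tuple(g))
                # remove members of SOLUTION dominated by the new solution
                solutions[:] = [s for s in solutions
                                if not any(dominates(t, s) for t in solutions)]
            return
        for child in successors(node):
            g2 = [gi + ci for gi, ci in zip(g, edge_cost(node, child))]
            dfsearch(child, g2, obj, threshold)

    # Phase 1: iterative deepening on the first objective.
    threshold = 0
    while not solutions:
        dfsearch(start, [0] * m, 0, threshold)
        threshold += 1

    # Phase 2: fix a maximum threshold for each remaining objective.
    for obj in range(1, m):
        max_threshold = max(s[obj] for s in solutions)
        for threshold in range(max_threshold + 1):
            dfsearch(start, [0] * m, obj, threshold)
    return solutions
```

On a toy one-level tree whose start node has three goal children reached at costs (1, 3), (2, 1) and (2, 2), the sketch returns the non-dominated set {(1, 3), (2, 1)}; (2, 2) is pruned because (2, 1) dominates it.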
We use this new maximum threshold to do iterative deepening search and generate more solutions. Like IDA* [2], IDMOA* also requires the evaluation functions to be monotone. The algorithm terminates when all the objectives have been considered. SOLUTION then contains all the non-dominated solutions.

3. The IDMOA* algorithm

Input: A graph G = (V, E) with vertices v ∈ V. For each edge e ∈ E, there corresponds an m-tuple (w_1e, w_2e, ..., w_me) representing the cost values for the m objectives.
Output: The set of non-dominated solutions, namely SOLUTION.
Global: ObjIndex, MaxThreshold, Threshold, SOLUTION.

0. Initialize by setting SOLUTION equal to the empty set and SNODE to the start node.
1. Find all solutions with the minimum value for the first objective
1.1. Set ObjIndex equal to 1 and Threshold equal to the heuristic estimate of the minimum threshold for the first objective.
1.2. If SOLUTION is empty, do the following:
1.2.1. Perform DFSEARCH(SNODE) to find solutions.
1.2.2. Heuristically increment Threshold by the minimum step considering the first objective.

1.2.3. Go to Step (1.2).
1.3. Otherwise, continue.
2. Fix thresholds for the other objectives and find all solutions using iterative deepening
For ObjIndex := 2 to m, do the following:
2.1. Set MaxThreshold equal to max(s(ObjIndex); ∀s ∈ SOLUTION).
2.2. Find solutions using iterative deepening search:
2.2.1. Set Threshold equal to the heuristic estimate of the minimum threshold for the current objective (ObjIndex).
2.2.2. If Threshold ≤ MaxThreshold, then do the following:
2.2.2.1. Perform DFSEARCH(SNODE) to find solutions.
2.2.2.2. Heuristically increment Threshold by the minimum step considering the current objective.
2.2.2.3. Go to Step (2.2.2).
2.2.3. Otherwise, continue.
3. Stop.

The DFSEARCH() routine does an exhaustive search to find all the non-dominated solutions which satisfy the threshold criterion. Each solution found is added to SOLUTION, and dominated solutions are removed from SOLUTION.

DFSEARCH(n)
1. Check for a valid node
1.1. If n(ObjIndex) is greater than Threshold, go to Step (4).
1.2. Otherwise, continue.
2. Identify solutions
2.1. If n is a goal node and n(ObjIndex) is not greater than Threshold, then do the following:
2.1.1. Add it to SOLUTION.
2.1.2. Remove any dominated members of SOLUTION.
2.1.3. Go to Step (4).
2.2. Otherwise, continue.
3. Expand n and examine its successors
3.1. If n has no successors, go to Step (4).
3.2. For all successors n' of n, do the following:
3.2.1. If n'(ObjIndex) is not greater than Threshold and n' is not dominated by any solution in SOLUTION, then recursively do DFSEARCH(n').
4. Return.

4. Properties of IDMOA*

In the following we show that IDMOA* inherits the properties of IDA* and MOA*. We discuss the completeness of IDMOA* and its optimality with respect to MOA*.

Lemma 1. S_{i+1} ⊇ S_i, ∀i: 1 ≤ i < m.

Proof. We have fixed Threshold_{i+1} = max(s(i+1); ∀s ∈ S_i), i.e. ∀s ∈ S_i, s(i+1) ≤ Threshold_{i+1}. Also, for any new solution t found in the (i+1)th step, t(i) > Threshold_i. But ∀s ∈ S_i, s(i) ≤ Threshold_i. We observe that t(i) > s(i), and so t cannot dominate s.
Therefore, all solutions in S_i continue to be non-dominated when new solutions are added in the (i+1)th step. From the algorithm, S_{i+1} = {s | s(i+1) ≤ Threshold_{i+1} and s is non-dominated}. Therefore,

∀s, s ∈ S_i ⇒ s ∈ S_{i+1}. (1)

We now show that there can exist a non-dominated solution which is not in S_i but is in S_{i+1}. Consider the case when there exists a non-dominated solution s' and an integer k such that s'(k) > s(k) ∀s ∈ S_i, 1 ≤ k ≤ i, and s'(i+1) ≤ min(s(i+1); ∀s ∈ S_i). Clearly, s' ∉ S_i, but s' ∈ S_{i+1}, since s'(i+1) ≤ Threshold_{i+1}. (2)

From (1) and (2), S_{i+1} ⊇ S_i, ∀i: 1 ≤ i < m. □

Definition (Non-dominated node). A node in the search graph is said to be non-dominated if the cost vector associated with it is not dominated by that of any non-dominated solution.

Lemma 2. All the non-dominated nodes will be expanded by IDMOA*.

Proof. We prove this by contradiction. Let us assume that IDMOA* does not expand a node n which is not dominated by any solution in S_m. Since n was not expanded, it follows from the algorithm that

n(i) > Threshold_i, ∀i: 1 ≤ i ≤ m. (3)

Now consider the first solution detected by IDMOA*, say s_1, i.e. s_1 ∈ S_1. Since s_1 ∈ S_1, s_1 ∈ S_k ∀k: 1 ≤ k ≤ m, according to Lemma 1. Therefore,

s_1(i) ≤ Threshold_i, ∀i: 1 ≤ i ≤ m. (4)

Due to facts (3) and (4), and by the definition of dominance, n is dominated by s_1. This contradicts our assumption that n is a non-dominated node. So IDMOA* expands all the non-dominated nodes. □

Now we prove that IDMOA* is complete, meaning that it finds all the non-dominated solutions.

Theorem 3. IDMOA* is complete.

Proof. Let us assume that IDMOA* is incomplete, and that it has missed a solution s which is not dominated by any solution in S_m, s ∉ S_m. But by Lemma 2, since IDMOA* expands all the non-dominated nodes, s will be considered for expansion by IDMOA*. It then follows from the algorithm that s will be identified as a solution during DFSEARCH(). This contradicts our assumption that IDMOA* has missed the non-dominated solution s. Therefore IDMOA* is complete. □

Now we examine the optimality of IDMOA* with respect to MOA* by considering the nodes expanded (as in the case of IDA* [3]) by IDMOA* and MOA*. The asymptotic optimality of IDMOA* is still subject to the optimality conditions for IDA* described in [4].

Theorem 4. IDMOA* does not expand a node which is not expanded by MOA*.

Proof. By Lemma 17 in [7], we know that MOA* expands all the non-dominated nodes, and only the non-dominated nodes. Let us assume that IDMOA* expands a node n which is not expanded by MOA*. So n is dominated by some solution. If n(w_1, w_2, ..., w_m) is a dominated node, then there should exist a solution s(w'_1, w'_2, ..., w'_m) which dominates n. By the definition of dominance, ∀i: 1 ≤ i ≤ m, w'_i ≤ w_i, and ∃j: 1 ≤ j ≤ m, w'_j < w_j.
Since s is a non-dominated solution, according to Lemma 2 there exists i: 1 ≤ i ≤ m such that s(i) ≤ Threshold_i. Let j be the first such objective (in the order in which the objectives are considered by the algorithm) for which s(j) ≤ Threshold_j. Consider the iterative deepening search phase in which the threshold for objective j is fixed. With s dominating n, we have two possibilities.

Case 1: n(j) > s(j). Clearly, s will be encountered before n in iterative deepening. Then s, being a solution, will be added to SOLUTION. Since n is dominated by s, n will not be expanded in further iterations.

Case 2: n(j) = s(j). Consider the first iteration during iterative deepening in which s is encountered. In this case, both s and n are encountered in the same invocation of DFSEARCH(), though either of them can be encountered first. n will not be expanded in the same invocation of DFSEARCH(), since any child of n will have an evaluation function value greater than the threshold used in that iteration, as the evaluation function is monotone and the threshold is incremented by minimum steps. s gets added to SOLUTION since it is a non-dominated solution. Since n is dominated by s, it is not considered for expansion in the next iteration.

Therefore, we contradict our assumption that a dominated node n is expanded by IDMOA*. □

Because of the above result, the following corollary can be stated.

Corollary 5. The order in which IDMOA* considers the objectives does not affect the set of nodes expanded.

Proof. From Lemma 2, Theorem 4 and Lemma 17 of [7], we see that both IDMOA* and MOA* expand the same set of nodes. Since Lemma 2 and Theorem

4 hold irrespective of the order in which the objectives are considered, we conclude that the set of nodes expanded by IDMOA* is not affected by the order in which it considers the objectives. □

5. Implementation and performance

The IDMOA* algorithm has been tested on search problems such as the multiobjective TSP. In practice, IDMOA* is found to be more efficient than MOA* in terms of actual time. This is basically due to two reasons:
(1) MOA* finds the non-dominated set from OPEN [7] in each iteration. Since OPEN is a large set (growing exponentially), finding the non-dominated set is time consuming. IDMOA* instead tests whether a node is dominated by any of the solutions in SOLUTION before expanding it. Since SOLUTION is much smaller than OPEN, IDMOA* proceeds faster.
(2) The overheads associated with IDMOA* are much less than those of MOA*, where many sets have to be maintained.

6. Conclusion

Multiobjective search is a general problem-solving technique for problems in which there are multiple objectives. We have presented an extension of MOA*, called iterative deepening multiobjective A* (IDMOA*), to tackle the same class of problems. We have shown the correctness and completeness of the algorithm. Since IDMOA* uses only linear space, it can be a good candidate for the general class of multiobjective search problems. Finding general approximate versions of IDMOA* will be an interesting topic for future work.

References

[1] A.V. Aho, J.E. Hopcroft and J.D. Ullman, Data Structures and Algorithms (Addison-Wesley, Reading, MA, 1983).
[2] R.E. Korf, Depth first iterative deepening: An optimal admissible tree search, Artificial Intelligence 27 (1985).
[3] R.E. Korf, Optimal path-finding algorithms, in: L. Kanal and V. Kumar, eds., Search in Artificial Intelligence (Springer, Berlin, 1988).
[4] A. Mahanti, S. Ghosh, D.S. Nau, A.K. Pal and L. Kanal, Performance of IDA* on trees and graphs, in: Proc. AAAI-92 (1992).
[5] N.J. Nilsson, Principles of Artificial Intelligence (Tioga, Palo Alto, CA, 1980).
[6] R.K. Singh, S. Kumar and V. Arvind, A heuristic search strategy for optimization of trade-off cost measures, in: Proc. IEEE Internat. Conf. on Tools for AI, San Jose, CA (1991).
[7] B.S. Stewart and C.C. White, Multiobjective A*, J. ACM 38 (4) (1991).


More information

Parallel Best-First Search of State-Space Graphs: A Summary of Results *

Parallel Best-First Search of State-Space Graphs: A Summary of Results * From: AAAI-88 Proceedings. Copyright 1988, AAAI (www.aaai.org). All rights reserved. Parallel Best-First Search of State-Space Graphs: A Summary of Results * Vi+ I$umari K. Ramesh, and V. Nageshwara Rae

More information

Planning Techniques for Robotics Search Algorithms: Multi-goal A*, IDA*

Planning Techniques for Robotics Search Algorithms: Multi-goal A*, IDA* 6-350 Planning Techniques for Robotics Search Algorithms: Multi-goal A*, IDA* Maxim Likhachev Robotics Institute Agenda A* with multiple goals Iterative Deepening A* (IDA*) 2 Support for Multiple Goal

More information

Graph Algorithms. Tours in Graphs. Graph Algorithms

Graph Algorithms. Tours in Graphs. Graph Algorithms Graph Algorithms Tours in Graphs Graph Algorithms Special Paths and Cycles in Graphs Euler Path: A path that traverses all the edges of the graph exactly once. Euler Cycle: A cycle that traverses all the

More information

COMP9414: Artificial Intelligence Informed Search

COMP9414: Artificial Intelligence Informed Search COMP9, Monday 9 March, 0 Informed Search COMP9: Artificial Intelligence Informed Search Wayne Wobcke Room J- wobcke@cse.unsw.edu.au Based on slides by Maurice Pagnucco Overview Heuristics Informed Search

More information

Search Algorithms for Discrete Optimization Problems

Search Algorithms for Discrete Optimization Problems Search Algorithms for Discrete Optimization Problems Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar To accompany the text ``Introduction to Parallel Computing'', Addison Wesley, 2003. Topic

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

CS473-Algorithms I. Lecture 11. Greedy Algorithms. Cevdet Aykanat - Bilkent University Computer Engineering Department

CS473-Algorithms I. Lecture 11. Greedy Algorithms. Cevdet Aykanat - Bilkent University Computer Engineering Department CS473-Algorithms I Lecture 11 Greedy Algorithms 1 Activity Selection Problem Input: a set S {1, 2,, n} of n activities s i =Start time of activity i, f i = Finish time of activity i Activity i takes place

More information

1 Non greedy algorithms (which we should have covered

1 Non greedy algorithms (which we should have covered 1 Non greedy algorithms (which we should have covered earlier) 1.1 Floyd Warshall algorithm This algorithm solves the all-pairs shortest paths problem, which is a problem where we want to find the shortest

More information

Informed Search. CS 486/686 University of Waterloo May 10. cs486/686 Lecture Slides 2005 (c) K. Larson and P. Poupart

Informed Search. CS 486/686 University of Waterloo May 10. cs486/686 Lecture Slides 2005 (c) K. Larson and P. Poupart Informed Search CS 486/686 University of Waterloo May 0 Outline Using knowledge Heuristics Best-first search Greedy best-first search A* search Other variations of A* Back to heuristics 2 Recall from last

More information

CS510 \ Lecture Ariel Stolerman

CS510 \ Lecture Ariel Stolerman CS510 \ Lecture02 2012-10-03 1 Ariel Stolerman Midterm Evan will email about that after the lecture, at least 2 lectures from now. The exam will be given in a regular PDF (not an online form). We will

More information

Problem Solving: Informed Search

Problem Solving: Informed Search Problem Solving: Informed Search References Russell and Norvig, Artificial Intelligence: A modern approach, 2nd ed. Prentice Hall, 2003 (Chapters 1,2, and 4) Nilsson, Artificial intelligence: A New synthesis.

More information

A Chosen-Plaintext Linear Attack on DES

A Chosen-Plaintext Linear Attack on DES A Chosen-Plaintext Linear Attack on DES Lars R. Knudsen and John Erik Mathiassen Department of Informatics, University of Bergen, N-5020 Bergen, Norway {lars.knudsen,johnm}@ii.uib.no Abstract. In this

More information

Searching a Sorted Set of Strings

Searching a Sorted Set of Strings Department of Mathematics and Computer Science January 24, 2017 University of Southern Denmark RF Searching a Sorted Set of Strings Assume we have a set of n strings in RAM, and know their sorted order

More information

Searching with Partial Information

Searching with Partial Information Searching with Partial Information Above we (unrealistically) assumed that the environment is fully observable and deterministic Moreover, we assumed that the agent knows what the effects of each action

More information

Trees. 3. (Minimally Connected) G is connected and deleting any of its edges gives rise to a disconnected graph.

Trees. 3. (Minimally Connected) G is connected and deleting any of its edges gives rise to a disconnected graph. Trees 1 Introduction Trees are very special kind of (undirected) graphs. Formally speaking, a tree is a connected graph that is acyclic. 1 This definition has some drawbacks: given a graph it is not trivial

More information

CS:4420 Artificial Intelligence

CS:4420 Artificial Intelligence CS:4420 Artificial Intelligence Spring 2018 Informed Search Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart

More information

The Optimal Locking Problem in a Directed Acyclic Graph

The Optimal Locking Problem in a Directed Acyclic Graph March 1981 Report. No. STAN-CS-81-847. The Optimal Locking Problem in a Directed Acyclic Graph by Henry ;. Korth Air Force Office of Scientific Rcscarch Department of Computer Science Stanford University

More information

Learning Objectives. c D. Poole and A. Mackworth 2010 Artificial Intelligence, Lecture 3.5, Page 1

Learning Objectives. c D. Poole and A. Mackworth 2010 Artificial Intelligence, Lecture 3.5, Page 1 Learning Objectives At the end of the class you should be able to: justify why depth-bounded search is useful demonstrate how iterative-deepening works for a particular problem demonstrate how depth-first

More information

CS 771 Artificial Intelligence. Problem Solving by Searching Uninformed search

CS 771 Artificial Intelligence. Problem Solving by Searching Uninformed search CS 771 Artificial Intelligence Problem Solving by Searching Uninformed search Complete architectures for intelligence? Search? Solve the problem of what to do. Learning? Learn what to do. Logic and inference?

More information

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing 244 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 10, NO 2, APRIL 2002 Heuristic Algorithms for Multiconstrained Quality-of-Service Routing Xin Yuan, Member, IEEE Abstract Multiconstrained quality-of-service

More information

ifn >-- 2 is even, ifn is odd, ifn =0,

ifn >-- 2 is even, ifn is odd, ifn =0, SIAM J. ALG. DISC. METH. Voi. 7, No. 1, January 1986 1986 Society for Industrial and Applied Mathematics 015 THE NUMBER OF MAXIMAL INDEPENDENT SETS IN A TREE* HERBERT S. WILFf Abstract. can have. We find

More information

Informed search methods

Informed search methods CS 2710 Foundations of AI Lecture 5 Informed search methods Milos Hauskrecht milos@pitt.edu 5329 Sennott Square Announcements Homework assignment 2 is out Due on Tuesday, September 19, 2017 before the

More information

An Algorithm for Solving the Traveling Salesman Problem

An Algorithm for Solving the Traveling Salesman Problem JKAU: Eng. Sci.. vol. 4, pp. 117 122 (1412 A.H.l1992 A.D.) An Algorithm for Solving the Traveling Salesman Problem M. HAMED College ofarts and Science, Bahrain University, [sa Town, Bahrain ABSTRACT. The

More information

Are Fibonacci Heaps Optimal? Diab Abuaiadh and Jeffrey H. Kingston ABSTRACT

Are Fibonacci Heaps Optimal? Diab Abuaiadh and Jeffrey H. Kingston ABSTRACT Are Fibonacci Heaps Optimal? Diab Abuaiadh and Jeffrey H. Kingston ABSTRACT In this paper we investigate the inherent complexity of the priority queue abstract data type. We show that, under reasonable

More information

Power domination in block graphs

Power domination in block graphs Theoretical Computer Science 359 (2006) 299 305 www.elsevier.com/locate/tcs Power domination in block graphs Guangjun Xu, Liying Kang, Erfang Shan, Min Zhao Department of Mathematics, Shanghai University,

More information

Informed search algorithms

Informed search algorithms Artificial Intelligence Topic 4 Informed search algorithms Best-first search Greedy search A search Admissible heuristics Memory-bounded search IDA SMA Reading: Russell and Norvig, Chapter 4, Sections

More information

PLT- Positional Lexicographic Tree: A New Structure for Mining Frequent Itemsets

PLT- Positional Lexicographic Tree: A New Structure for Mining Frequent Itemsets PLT- Positional Lexicographic Tree: A New Structure for Mining Frequent Itemsets Azzedine Boukerche and Samer Samarah School of Information Technology & Engineering University of Ottawa, Ottawa, Canada

More information

Chapter S:II. II. Search Space Representation

Chapter S:II. II. Search Space Representation Chapter S:II II. Search Space Representation Systematic Search Encoding of Problems State-Space Representation Problem-Reduction Representation Choosing a Representation S:II-1 Search Space Representation

More information

This lecture. Lecture 6: Search 5. Other Time and Space Variations of A* Victor R. Lesser. RBFS - Recursive Best-First Search Algorithm

This lecture. Lecture 6: Search 5. Other Time and Space Variations of A* Victor R. Lesser. RBFS - Recursive Best-First Search Algorithm Lecture 6: Search 5 Victor R. Lesser CMPSCI 683 Fall 2010 This lecture Other Time and Space Variations of A* Finish off RBFS SMA* Anytime A* RTA* (maybe if have time) RBFS - Recursive Best-First Search

More information

On the Space-Time Trade-off in Solving Constraint Satisfaction Problems*

On the Space-Time Trade-off in Solving Constraint Satisfaction Problems* Appeared in Proc of the 14th Int l Joint Conf on Artificial Intelligence, 558-56, 1995 On the Space-Time Trade-off in Solving Constraint Satisfaction Problems* Roberto J Bayardo Jr and Daniel P Miranker

More information

Combinatorial Problems on Strings with Applications to Protein Folding

Combinatorial Problems on Strings with Applications to Protein Folding Combinatorial Problems on Strings with Applications to Protein Folding Alantha Newman 1 and Matthias Ruhl 2 1 MIT Laboratory for Computer Science Cambridge, MA 02139 alantha@theory.lcs.mit.edu 2 IBM Almaden

More information

Constructing arbitrarily large graphs with a specified number of Hamiltonian cycles

Constructing arbitrarily large graphs with a specified number of Hamiltonian cycles Electronic Journal of Graph Theory and Applications 4 (1) (2016), 18 25 Constructing arbitrarily large graphs with a specified number of Hamiltonian cycles Michael School of Computer Science, Engineering

More information

Bound Consistency for Binary Length-Lex Set Constraints

Bound Consistency for Binary Length-Lex Set Constraints Bound Consistency for Binary Length-Lex Set Constraints Pascal Van Hentenryck and Justin Yip Brown University, Box 1910 Carmen Gervet Boston University, Providence, RI 02912 808 Commonwealth Av. Boston,

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Lecture 1. 1 Notation

Lecture 1. 1 Notation Lecture 1 (The material on mathematical logic is covered in the textbook starting with Chapter 5; however, for the first few lectures, I will be providing some required background topics and will not be

More information

A New Approach of Iterative Deepening Bi- Directional Heuristic Front-to-Front Algorithm (IDBHFFA)

A New Approach of Iterative Deepening Bi- Directional Heuristic Front-to-Front Algorithm (IDBHFFA) International Journal of Electrical & Computer Sciences IJECS-IJENS Vol:10 No: 02 13 A New Approach of Iterative Deepening Bi- Directional Heuristic Front-to-Front Algorithm (IDBHFFA) First A. Kazi Shamsul

More information

Depth-First. Abstract. Introduction

Depth-First. Abstract. Introduction From: AAAI-91 Proceedings. Copyright 1991, AAAI (www.aaai.org). All rights reserved. Depth-First vs Nageshwara Rae Vempaty Vipin Kumar* ichard IL orft Dept. of Computer Sciences, Computer Science Dept.,

More information

Informed Search Algorithms

Informed Search Algorithms Informed Search Algorithms CITS3001 Algorithms, Agents and Artificial Intelligence Tim French School of Computer Science and Software Engineering The University of Western Australia 2017, Semester 2 Introduction

More information

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

More information

Effects of Module Encapsulation in Repetitively Modular Genotypes on the Search Space

Effects of Module Encapsulation in Repetitively Modular Genotypes on the Search Space Effects of Module Encapsulation in Repetitively Modular Genotypes on the Search Space Ivan I. Garibay 1,2, Ozlem O. Garibay 1,2, and Annie S. Wu 1 1 University of Central Florida, School of Computer Science,

More information

Generating edge covers of path graphs

Generating edge covers of path graphs Generating edge covers of path graphs J. Raymundo Marcial-Romero, J. A. Hernández, Vianney Muñoz-Jiménez and Héctor A. Montes-Venegas Facultad de Ingeniería, Universidad Autónoma del Estado de México,

More information

Primes in Classes of the Iterated Totient Function

Primes in Classes of the Iterated Totient Function 1 2 3 47 6 23 11 Journal of Integer Sequences, Vol. 11 (2008), Article 08.1.2 Primes in Classes of the Iterated Totient Function Tony D. Noe 14025 NW Harvest Lane Portland, OR 97229 USA noe@sspectra.com

More information

Approximability Results for the p-center Problem

Approximability Results for the p-center Problem Approximability Results for the p-center Problem Stefan Buettcher Course Project Algorithm Design and Analysis Prof. Timothy Chan University of Waterloo, Spring 2004 The p-center

More information

Foundations of Artificial Intelligence

Foundations of Artificial Intelligence Foundations of Artificial Intelligence 4. Informed Search Methods Heuristics, Local Search Methods, Genetic Algorithms Joschka Boedecker and Wolfram Burgard and Bernhard Nebel Albert-Ludwigs-Universität

More information