Discrete search algorithms

Robot Autonomy (16-662, S13)
Lecture #08 (Monday, February 11)
Discrete search algorithms
Lecturer: Siddhartha Srinivasa
Scribes: Kavya Suresh & David Matten

I. INTRODUCTION

These notes give a detailed explanation of the algorithms involved in discrete search. The following sections describe, in detail, the piano movers' problem, discrete feasible planning, and the different forward search methods:

1. Piano movers' problem
2. Some basic terms
3. Discrete feasible planning
   a. Introduction to discrete feasible planning
   b. Discrete feasible planning algorithm model
4. Forward search methods
   a. Breadth first
   b. Depth first
   c. Dijkstra's algorithm
   d. A*
   e. Best first
   f. Iterative deepening

II. PIANO MOVERS' PROBLEM

The piano movers' problem is the classic path planning problem: given an open subset U of an n-dimensional space and two subsets C_0 and C_1 of U, is it possible to move from C_0 to C_1 while remaining entirely inside U? (1) In other words, given a rigid body and its obstacles, the goal is to find a collision-free path from the initial configuration to the final configuration. The obstacles do not move, and the planning is done offline, i.e. before the program is executed. (2) In simple words, given a set of obstacles and the initial and final positions of the robot, the goal is to find a path that moves the robot from the initial to the final position while avoiding the obstacles. (1)

No point on the robot may collide or come into contact with an obstacle; doing so causes the motion to fail. This demands that every point on the robot be represented, which is done through the configuration of the robot. The configuration space (C-space) has as many dimensions as the robot has degrees of freedom (DOF). In the case of the piano, the x, y, z position and the roll, pitch and yaw angles make up its 6 DOFs. (2)

INTRODUCTION TO DISCRETE SEARCH

The piano movers' problem involves planning a path from an initial configuration, q_i, to a final configuration, q_f. In the continuous domain this problem is PSPACE-hard. To simplify the problem, we shift the C-space into the discrete domain: a discrete grid is overlaid on the C-space and every cell that overlaps C_obstacle is marked as containing an obstacle. An example of this can be seen in Figure 1.

Figure 1: Converting from continuous C-space to discrete C-space; any cell that contains part of C_obstacle is made into an obstacle cell.

This transition creates a finite, countable number of states x in the state space X. There are actions u which can be performed on x; the set of actions available at x is denoted U(x). For a 2D discrete problem, some simple examples of u are the commands "move up", "move right", "move left" and "move down". For any given state x there is a state transition function f which, given the current state and an action, produces the subsequent state x' = f(x, u). For example, if x is the cell (2, 1) and u is the command "move up", represented by the offset (0, 1), then x' = (2, 1) + (0, 1) = (2, 2).

BASIC TERMINOLOGY

1. Workspace: the space that contains the obstacles and in which the robot is said to move.
2. Configuration space: the vector space that defines the system configuration, called the configuration space or C-space. This is the space that contains the robot and its surroundings. (3) The C-space consists of the robot's current configuration, the forbidden space, which is the region where the robot collides with the obstacles, and the free space, which is the region where it does not. (1)
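As a concrete illustration of this discretization, the following minimal Python sketch encodes states as (x, y) cells, actions as coordinate offsets, and U(x) as the offsets that keep the robot on the grid and out of obstacle cells. The grid size and obstacle set are hypothetical values chosen only for illustration.

    # Minimal sketch of a discretized 2D C-space (hypothetical grid and obstacles).
    # States are (x, y) cells; "move up" is the offset (0, 1), as in the example above.
    WIDTH, HEIGHT = 5, 5
    OBSTACLES = {(1, 3), (3, 3)}          # assumed obstacle cells

    ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def in_free_space(x):
        """True if cell x lies on the grid and is not an obstacle cell."""
        return 0 <= x[0] < WIDTH and 0 <= x[1] < HEIGHT and x not in OBSTACLES

    def actions(x):
        """U(x): the actions applicable at state x."""
        return [u for u, d in ACTIONS.items()
                if in_free_space((x[0] + d[0], x[1] + d[1]))]

    def transition(x, u):
        """State transition function: x' = f(x, u)."""
        d = ACTIONS[u]
        return (x[0] + d[0], x[1] + d[1])

    print(transition((2, 1), "up"))       # (2, 2), matching the example above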

III. DISCRETE FEASIBLE PLANNING

a. Introduction to discrete feasible planning (3)

Each distinct situation in the world is referred to as a state, represented as x. The set of all states is called the state space and is represented as X. This set X must be finite and should not contain unnecessary information, as this can cause problems in planning. The planner chooses actions, represented as u; an action applied to the current state produces a new state. The state transition equation is x' = f(x, u), as mentioned above. The action space U(x) is the set of all actions that can be applied from x to produce a new state x'. The set of all actions over all states is U = ∪_{x ∈ X} U(x). A discrete planning algorithm produces the sequence of actions that takes the initial state x_I to the goal state x_G.

b. Discrete feasible planning algorithm model (3)

1. A nonempty state space X, which is a finite set of states.
2. For each state x ∈ X, a finite action space U(x).
3. A state transition function f that produces a state f(x, u) ∈ X for every x ∈ X and u ∈ U(x). The state transition equation is derived from f as x' = f(x, u).
4. An initial state x_I ∈ X.
5. A goal set X_G ⊂ X.

IV. FORWARD SEARCH METHODS

Given a starting state x_i and a goal state x_g, the goal is to find a feasible path from x_i to x_g. One method of solving this problem is to perform a forward (or backward) search. A forward search starts from the initial state and expands outward until it enters the goal set. A simple algorithm that performs forward search is shown in Figure 2. The algorithm maintains a priority queue Q, a list of states x ordered by a priority function. Q has two operations, Q.insert(x) and Q.getfirst(). Q.insert adds a state to the queue; for example, if people stand in a line sorted by ascending height, a tall newcomer is added at the end of the line. Q.getfirst() removes and returns the first element of the queue; in the same example, the element returned would be the shortest person (what "first" means depends on the type of queue).

    Q.insert(x_i) and mark x_i as visited
    while Q is not empty:
        x = Q.getfirst()
        if x ∈ X_G:
            return success
        else:
            for all u ∈ U(x):
                x' = f(x, u)
                if x' not visited:
                    mark x' as visited
                    Q.insert(x')
    return failure

Figure 2: The basic algorithm for a forward search.

The algorithm runs as follows. While the queue is not empty, pop the first element from it. If x is in the goal set, declare success. Otherwise, for every action feasible at the popped state, compute x' = f(x, u). The algorithm keeps track of all states that have already been visited, marks each newly generated, unvisited state as visited, and inserts it into Q. If the queue empties without the goal being reached, return failure. In short, the forward search repeatedly pops elements from the queue, checks whether each is in the goal, and, if it is not, computes its children (by looking at the actions that can be performed) and adds them to Q.
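The following Python sketch mirrors the forward search of Figure 2, reusing the actions and transition helpers sketched earlier; the queue object is assumed to provide insert, getfirst and empty, a small extension of the Q.insert/Q.getfirst interface described above.

    def forward_search(x_start, is_goal, actions, transition, queue):
        """Generic forward search in the style of Figure 2. Returns True on success.

        The concrete queue type (FIFO, LIFO, priority) determines how the
        search explores the state space.
        """
        visited = {x_start}
        queue.insert(x_start)
        while not queue.empty():
            x = queue.getfirst()
            if is_goal(x):
                return True
            for u in actions(x):
                x_next = transition(x, u)
                if x_next not in visited:
                    visited.add(x_next)
                    queue.insert(x_next)
        return False

Passing a FIFO, LIFO, or cost-ordered queue yields the breadth first, depth first, and optimal search variants discussed in the following sections.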

Manipulating the priority function provides different ways to solve the problem: by changing the queue type, the search is performed in different orders. Two basic queue types are the FIFO (First In, First Out) and LIFO (Last In, First Out) structures. While both methods produce results, the results are not guaranteed to be optimal; optimal planning algorithms, which find the optimal path from q_i to q_g, are described later.

BASIC TERMINOLOGY

1. Unvisited: states that have not yet been visited.
2. Next: if x is the current state, x' is a next state if x' = f(x, u) for some action u that allows this transition.
3. Dead: a dead state is a state that has been visited and whose following states have all been visited as well, so no further progress toward the goal can be made through it.
4. Alive: alive states are states that have been visited but whose next states have not all been visited, leaving the possibility of finding a feasible plan. Alive states are stored in the queue; the order in which they are accessed is determined by the algorithm. (3)

BREADTH FIRST

Explanation

Breadth first search uses a FIFO queue: the first element added is the first element removed. Running the forward search with a FIFO queue produces the breadth first algorithm. This queue structure is similar to a service line at a grocery store or cafeteria, where the first person in line is the first person served. The algorithm grows sub-trees in all directions and stops when it reaches the goal state. It can be slow in reaching the goal, but the path will generally be fairly short and the branches will be fairly evenly distributed in all directions, so the explored region generally looks like a bull's-eye, as seen in Figure 3.

Figure 3: General shape of a breadth first search. The search expands approximately equally in all directions.

As a consequence, the first solution found from X_I to X_G is the one that uses the fewest steps. States that have already been visited are not reconsidered. This is said to be a systematic approach. (3)
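Continuing the sketch above, breadth first search can be obtained by handing the generic forward search a FIFO queue. The wrapper below, built on Python's collections.deque, is one assumed way to fill in that queue interface.

    from collections import deque

    class FIFOQueue:
        """FIFO queue: getfirst() returns the oldest inserted state (breadth first)."""
        def __init__(self):
            self._items = deque()
        def insert(self, x):
            self._items.append(x)
        def getfirst(self):
            return self._items.popleft()
        def empty(self):
            return len(self._items) == 0

    # Breadth first search from (0, 0) to (4, 4) on the hypothetical grid above.
    found = forward_search((0, 0), lambda x: x == (4, 4),
                           actions, transition, FIFOQueue())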

Example

An example of the algorithm of Figure 2 running a breadth first search with a FIFO queue is shown in Figure 4. For this example, the order in which U(x) is applied is move down, move left, move up, move right.

Figure 4: Breadth first search using a FIFO queue. The numbers in the figure give the order in which the cells are visited, from q_i to q_g.

1. q_i in queue. Run the algorithm on q_i; add 2, then 3, then 4 to the queue. Q: {4,3,2}
2. Take the first item off the queue (2); add 5 then 6 to the queue. Q: {6,5,4,3}
3. Take the first item off the queue (3); add 7 to the queue. Q: {7,6,5,4}
4. Take the first item off the queue (4); add 8 to the queue. Q: {8,7,6,5}
5. Take the first item off the queue (5); add 9 then 10 to the queue. Q: {10,9,8,7,6}
6. Take the first item off the queue (6). Q: {10,9,8,7}
7. Take the first item off the queue (7); add 11 to the queue. Q: {11,10,9,8}
8. Take the first item off the queue (8); add 12 to the queue. Q: {12,11,10,9}
9. Take the first item off the queue (9). Q: {12,11,10}
10. Take the first item off the queue (10); add 13 to the queue. Q: {13,12,11}
11. Take the first item off the queue (11). Q: {13,12}
12. Take the first item off the queue (12).
13. This process continues until the goal is removed from the queue. This occurs after step 25, where 29 is added to the queue.

After step 29 the items in the queue are Q: {29,28,27,26,q_g}. At this point q_g is removed and the algorithm returns success. The way to compute the path is to keep track of each state's parent, go from child to parent, and reverse the sequence to obtain the path. Each child has one parent, but each parent can have multiple children.
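The parent-tracking idea mentioned above can be added to the forward search sketch with a dictionary mapping each state to the state it was reached from; the helper below is an assumed illustration of that bookkeeping.

    def reconstruct_path(parent, x_goal):
        """Walk from the goal back to the start via parent links, then reverse."""
        path = [x_goal]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        path.reverse()
        return path

    # Inside forward_search, record parent[x_start] = None at the start and
    # parent[x_next] = x whenever a new state is inserted; once the goal state
    # x is popped, reconstruct_path(parent, x) yields the path from q_i to q_g.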

It is important to note that the path found happens to be the optimal path, even though almost the entire space had to be explored. Although most of the space was explored, not all of it needed to be. The search progressed slowly, but it was able to provide a solution. In order to return the path, the parent of each cell (which cell was used to reach which cell) was stored.

DEPTH FIRST

Explanation

Depth first search uses a LIFO queue: the last element added is the first element removed. This queue structure is similar to a stack of dishes, where the last dish added to the stack is the first dish removed. The algorithm grows a sub-tree in one direction until it reaches a dead end; at that point it backtracks and continues a new sub-tree from the most recently visited state that still has unexplored successors. Like breadth first search, depth first search stops when it reaches the goal state. This algorithm can be fast in reaching the goal, but the path will generally be fairly long, and the explored region generally looks like a cone, as seen in Figure 5.

Figure 5: General shape of a depth first search. The search is done almost exclusively in a single direction.

In the depth first approach, Q functions as a stack: the last state to enter Q is the first one to leave. Here too, already visited states are not reconsidered. The algorithm proceeds in a "for all" loop, so there is no control over the path taken. This approach is suitable when X is finite. (3)

Example

An example of the algorithm of Figure 2 running a depth first search with a LIFO queue is shown in Figure 6. For this example, the order in which U(x) is applied is move up, move right, move down, move left.

Figure 6: Depth first search using a LIFO queue. The numbers in the figure give the order in which the cells are visited, from q_i to q_g.

1. q_i in queue. Run the algorithm on q_i; add 2, then 3, then 4 to the queue. Q: {4,3,2}
2. Take the last item off the queue (4); add 5 then 6 to the queue. Q: {6,5,3,2}
3. Take the last item off the queue (6); add 7 then 8 to the queue. Q: {8,7,5,3,2}
4. Take the last item off the queue (8); add 9 to the queue. Q: {9,7,5,3,2}
5. Take the last item off the queue (9); add 10 to the queue. Q: {10,7,5,3,2}
6. Take the last item off the queue (10); add 11 then 12 to the queue. Q: {12,11,7,5,3,2}
7. Take the last item off the queue (12); add 13 then 14 to the queue. Q: {14,13,11,7,5,3,2}
8. Take the last item off the queue (14); add 15 then 16 to the queue. Q: {16,15,13,11,7,5,3,2}
9. Take the last item off the queue (16); add 17 to the queue. Q: {17,15,13,11,7,5,3,2}
10. Take the last item off the queue (17); add 18 to the queue. Q: {18,15,13,11,7,5,3,2}
11. Take the last item off the queue (18); add 19 then 20 to the queue. Q: {20,19,15,13,11,7,5,3,2}
12. Take the last item off the queue (20); add 21 then q_g to the queue. Q: {q_g,21,19,15,13,11,7,5,3,2}
13. Take the last item off the queue (q_g) and return success.

It is important to note that the path was found in half the steps of the breadth first search, even though it was far from the optimal path. Although only about half of the space was explored, this search method could easily end up exploring too far in one direction and be extremely slow. Had the space in Figure 6 extended infinitely to the right, the search would never have encountered the goal. In the case of breadth first search, even if the space were infinite, the path returned would still be the same as that in Figure 4. The search progressed quickly, but was unable to provide a near-optimal solution. As in breadth first search, the parent of each cell (which cell was used to reach which cell) was stored to allow recovery of the final path.
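Analogously to the FIFO wrapper used for breadth first search, a LIFO stack turns the same forward search into depth first search; the wrapper below is again an assumed illustration of the queue interface.

    class LIFOQueue:
        """LIFO queue (stack): getfirst() returns the most recently inserted
        state (depth first)."""
        def __init__(self):
            self._items = []
        def insert(self, x):
            self._items.append(x)
        def getfirst(self):
            return self._items.pop()
        def empty(self):
            return len(self._items) == 0

    # Depth first search: identical call, different queue.
    found = forward_search((0, 0), lambda x: x == (4, 4),
                           actions, transition, LIFOQueue())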

OPTIMAL PLANNING ALGORITHMS

Although breadth first and depth first search both find a path to the goal, breadth first search is slow and depth first search does not find an efficient path. By changing the queue so that elements are selected according to some criterion, the problem can be solved faster and with an optimal result. To do this, a cost is defined for every state x and action u. The cost could be an action cost, such as the energy needed to go from x to x', or it could be determined by the distance to the nearest obstacle. Consider, for example, a robot with a noisy actuator: the goal is to make sure it does not bump into obstacles, so for every configuration the distance to the nearest obstacle could serve as the cost. The cost of applying action u in state x is given by a function L(x, u); common choices of L(x, u) are based on the distance to obstacles and the energy required to make the move. The goal of an optimal planning algorithm is to find a path that minimizes the total cost C(x) accumulated along the path.

DIJKSTRA'S ALGORITHM

In Dijkstra's algorithm, the queue is sorted by the cost-to-come from x_i to x, written C(x). The goal of the algorithm is to find C*(x_g), the minimal cost of reaching the goal, together with an optimal, feasible path. It is useful for finding shortest paths from a single source. Every edge e ∈ E of the search graph has a cost L(e) associated with applying the corresponding action; the cost of applying action u from state x is written L(x, u). The total cost of a path is the sum of the edge costs from X_I to X_G. Two kinds of cost are distinguished: the (possibly non-optimal) cost-to-come C(x), and the optimal cost-to-come C*(x), defined for each state as the minimum, over all paths from x_I to x, of the sum of the edge costs along the path. (3)

The algorithm starts with no cost-to-come estimates except C*(x_I) = 0 and incrementally builds the cost-to-come function until it converges to the optimal cost. It uses the forward search algorithm of Figure 2 with one modification: when a node x' = f(x, u) is generated and has not been visited, it is inserted into Q, and Q is kept sorted by cost-to-come. The cost-to-come of x' is the current cost-to-come of x plus the cost of going from x to x', i.e. C(x') = C*(x) + L(x, u); this is the best cost-to-come found so far. (3) If a node is generated that has already been visited, its cost is recalculated and compared to the previously stored cost, and the minimum of the two becomes the new C(x'). The most computationally expensive step of the algorithm is maintaining C(x). A theorem about this algorithm states that a state is popped from the queue only once its optimal cost has been computed; in particular, when the goal is popped, its cost is optimal. In other words, when a state becomes a dead state, the cost associated with it is the optimal cost-to-come.
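A rough sketch of this cost-ordered variant follows, using Python's heapq as the priority queue keyed on the cost-to-come; the unit edge cost is an illustrative default rather than something fixed by the notes.

    import heapq

    def dijkstra(x_start, is_goal, actions, transition, cost=lambda x, u: 1.0):
        """Forward search with the queue ordered by cost-to-come C(x).

        Returns the optimal cost-to-come of the first goal state popped,
        or None if the goal is unreachable.
        """
        cost_to_come = {x_start: 0.0}
        queue = [(0.0, x_start)]              # (C(x), x), smallest C first
        while queue:
            c, x = heapq.heappop(queue)
            if c > cost_to_come.get(x, float("inf")):
                continue                      # stale entry; a cheaper one was found
            if is_goal(x):
                return c                      # popped at its optimal cost C*(x)
            for u in actions(x):
                x_next = transition(x, u)
                c_next = c + cost(x, u)       # C(x') = C*(x) + L(x, u)
                if c_next < cost_to_come.get(x_next, float("inf")):
                    cost_to_come[x_next] = c_next
                    heapq.heappush(queue, (c_next, x_next))
        return None

Called with the grid helpers sketched earlier, e.g. dijkstra((0, 0), lambda x: x == (4, 4), actions, transition), this returns the minimum number of moves to the goal under unit edge costs.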
HEURISTIC PATH

A heuristic path can be defined when the final destination is known, and Dijkstra's algorithm can be modified to take advantage of this. In the example of Figure 7, the robot is required to move from X_I to X_G, and the blocks are the obstacles in the way. A function G is then defined as follows.

G maps every state to a nonnegative real number (G: X → R+) and must underestimate the true cost-to-go. The path the robot would traverse if all the obstacles were removed is called the heuristic path, and the associated cost is called the heuristic cost-to-go.

Figure 7: Example of a heuristic path from X_I to X_G.

A* ALGORITHM

The A* algorithm accounts for the total cost of getting to the goal state and thereby reduces the number of explored states. Two main cost functions are defined: the cost-to-come C(x), which is the cost from X_I to x, and the cost-to-go G(x), which is the cost from x to X_G. The optimal cost-to-come C* and the optimal cost-to-go G* differ in that C* can be built up incrementally during the search, but G* cannot be computed exactly and must be estimated. The goal is to compute an estimate Ĝ*(x) that is as close as possible to, but never greater than, the true optimal cost-to-go. One way to obtain such an estimate is from the heuristic path, e.g. a straight path connecting the state to the goal; since the actual path usually has to avoid obstacles, the true cost-to-go will be at least as large as this estimate. The contents of Q are sorted by the sum C*(x') + Ĝ*(x'). If Ĝ*(x') is 0, this reduces to Dijkstra's algorithm.

More generally, Q can be sorted by C(x) + εG(x), where ε weights the heuristic (see the sketch at the end of this section):

When ε = 0: Dijkstra's algorithm, i.e. sort by C(x).
When ε = ∞: best first search, i.e. sort by G(x).
When ε = 1: the A* algorithm, i.e. sort by C(x) + G(x).
When ε = 1.5: weighted A*, i.e. sort by C(x) + 1.5 G(x). In this case the estimate is no longer guaranteed to underestimate the cost-to-go, so the resulting path need not be optimal.

One disadvantage of the A* algorithm in comparison to the best first algorithm is that it is slower.

BEST FIRST SEARCH

Best first search is similar to A* in that it requires an estimate of the optimal cost-to-go function; in some cases, however, this estimate exceeds the optimal cost-to-go. The disadvantage is that optimal solutions are sometimes not found. The advantage is that execution is much faster, since fewer vertices are visited. This is a non-systematic search.

ITERATIVE DEEPENING

The iterative deepening method is used when the number of vertices at the next level is far greater than the number at the current level, i.e. when the branching factor is large. It performs a breadth first search in a depth first way: it sends out depth first probes that stop when they reach a particular depth, which makes it a cautious and efficient algorithm. It uses depth first search to find all the states that are at a certain distance, say n (or less), from the initial state; if the goal is not found, it starts over and searches to depth n + 1 (or less). It starts from n = 1 and proceeds until the goal is reached. This algorithm does not have the space requirements that the breadth first algorithm has.
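A rough sketch of this family of searches extends the Dijkstra sketch above so that the queue is ordered by C(x) + εG(x); the Manhattan-distance heuristic and the default parameters are assumptions chosen to suit the grid examples in these notes.

    import heapq

    def manhattan(x, x_goal):
        """Assumed admissible cost-to-go estimate for a 4-connected grid."""
        return abs(x[0] - x_goal[0]) + abs(x[1] - x_goal[1])

    def astar(x_start, x_goal, actions, transition,
              cost=lambda x, u: 1.0, heuristic=manhattan, eps=1.0):
        """Search with Q sorted by C(x) + eps * G(x).

        eps = 0 behaves like Dijkstra's algorithm, eps = 1 gives A*, and
        eps > 1 gives weighted A*. Returns the cost-to-come of the goal,
        or None if it is unreachable.
        """
        cost_to_come = {x_start: 0.0}
        queue = [(eps * heuristic(x_start, x_goal), x_start)]
        while queue:
            _, x = heapq.heappop(queue)
            if x == x_goal:
                return cost_to_come[x]
            for u in actions(x):
                x_next = transition(x, u)
                c_next = cost_to_come[x] + cost(x, u)
                if c_next < cost_to_come.get(x_next, float("inf")):
                    cost_to_come[x_next] = c_next
                    priority = c_next + eps * heuristic(x_next, x_goal)
                    heapq.heappush(queue, (priority, x_next))
        return None

Larger values of eps trade optimality for fewer expanded states, matching the ε discussion above.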

V. REFERENCES

1. Lu Yang and Zhenbing Zeng, "Symbolic Solution of a Piano Movers' Problem with Four Parameters," in Proceedings of Automated Deduction in Geometry, 5th International Workshop (ADG), Gainesville, 2004.
2. Piotr Indyk, "Lecture 11: Motion Planning," http://www.cs.arizona.edu/classes/cs437/fall12/lec11.prn.pdf, March 2005.
3. Steven M. LaValle, Planning Algorithms, Cambridge University Press, 2006.