Beyond Classical Search

Beyond Classical Search
Chapter 3 covered problems in which the whole search space is considered and the solution is a sequence of actions leading to a goal. Chapter 4 covers techniques (some developed outside of AI) that do not try to cover the whole space; only the goal state matters, not the steps taken to reach it. The techniques of Chapter 4 tend to use much less memory and are not guaranteed to find an optimal solution.

More Search Methods
- Local Search
  - Hill Climbing
  - Simulated Annealing
  - Beam Search
  - Genetic Search
- Local Search in Continuous Spaces
- Searching with Nondeterministic Actions
- Online Search (agent is executing actions)

Local Search Algorithms and Optimization Problems
- Complete-state formulation: for example, in the 8-queens problem all 8 queens are already on the board and need to be moved around to reach a goal state.
- Equivalent to optimization problems often found in science and engineering.
- Start somewhere and try to get to the solution from there.
- Search locally around the current state to decide where to go next.
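For concreteness, here is a minimal complete-state encoding of the 8-queens problem in Python: a state is a tuple giving the row of the queen in each column, and the objective counts non-attacking pairs. The representation and function names are illustrative choices, not fixed by the slides.

```python
from itertools import combinations
import random

def random_state(n=8):
    """One queen per column; entry i is the row of the queen in column i."""
    return tuple(random.randrange(n) for _ in range(n))

def value(state):
    """Number of non-attacking pairs of queens (28 is the best possible for 8 queens)."""
    ok = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 != r2 and abs(r1 - r2) != abs(c1 - c2):  # not same row, not same diagonal
            ok += 1
    return ok

def neighbors(state):
    """All states reachable by moving a single queen within its column."""
    result = []
    for col, row in enumerate(state):
        for new_row in range(len(state)):
            if new_row != row:
                result.append(state[:col] + (new_row,) + state[col + 1:])
    return result
```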

Pose Estimation Example
Given a geometric model of a 3D object and a 2D image of the object, determine the position and orientation of the object with respect to the camera that snapped the image.
[Figure: the 2D image and the 3D object model]
State: (x, y, z, θx, θy, θz)

Hill Climbing
A gradient-ascent style of solution (note: solutions are shown here as maxima, not minima). Often used for numerical optimization problems. How does it work? In a continuous space, the gradient tells you the direction in which to move uphill.

Steepest-Ascent Hill Climbing
current ← start node
loop do
    neighbor ← a highest-valued successor of current
    if neighbor.value <= current.value then return current.state
    current ← neighbor
end loop
At each step, the current node is replaced by the best (highest-valued) neighbor. This is sometimes called greedy local search.
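A minimal Python sketch of this loop, assuming the caller supplies a `neighbors` function and an objective `value` function (for instance, the 8-queens encoding sketched above); these names are illustrative.

```python
def hill_climb(start, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until none improves."""
    current = start
    while True:
        succs = neighbors(current)
        if not succs:
            return current
        best = max(succs, key=value)          # highest-valued successor
        if value(best) <= value(current):     # no uphill move left: local maximum
            return current
        current = best
```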

Hill Climbing Search
[Figure: a hill-climbing step; the current node's successors have values 6, 4, 10, 3, 2, and 8, so steepest ascent moves to the successor valued 10.] What if current had a value of 12?

Hill Climbing Problems
- Local maxima
- Plateaus
- Diagonal ridges
What is it sensitive to? Does it have any advantages?

Solving the Problems
- Allow backtracking (what happens to complexity?)
- Stochastic hill climbing: choose at random among uphill moves, weighting the probability by steepness
- Random restarts: if at first you don't succeed, try, try again
- Several moves in each of several directions, then test
- Jump to a different part of the search space
A random-restart sketch follows below.
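To make the random-restart idea concrete, a small sketch that reruns the hill climber from fresh random starts and keeps the best result; `random_state`, `neighbors`, and `value` are assumed to be supplied by the caller, as before.

```python
def random_restart_hill_climb(random_state, neighbors, value, restarts=20):
    """Run hill climbing from several random starts and keep the best local maximum."""
    best = None
    for _ in range(restarts):
        result = hill_climb(random_state(), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
    return best
```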

Simulated Annealing
- A variant of hill climbing (so up is good)
- Tries to explore enough of the search space early on, so that the final solution is less sensitive to the start state
- May make some downhill moves before finding a good way to move uphill

Simulated Annealing
The name comes from the physical process of annealing, in which a substance is raised to a high energy level (melted) and then cooled to a solid state. The probability of moving to a higher energy state instead of a lower one is p = e^(-ΔE/kT), where ΔE is the positive change in energy level, T is the temperature, and k is Boltzmann's constant.

Simulated Annealing
At the beginning, the temperature is high. As the temperature becomes lower:
- kT becomes lower
- ΔE/kT gets bigger
- (-ΔE/kT) gets smaller
- e^(-ΔE/kT) gets smaller
As the process continues, the probability of a downhill move gets smaller and smaller.

For Simulated Annealing
ΔE represents the change in the value of the objective function. Since the physical relationships no longer apply, drop k, so p = e^(-ΔE/T). We need an annealing schedule, which is a sequence of values of T: T0, T1, T2, ...

Simulated Annealing Algorithm
current ← start node
for each T on the schedule              /* need a schedule */
    next ← randomly selected successor of current
    evaluate next; if it's a goal, return it
    ΔE ← next.value − current.value     /* already negated */
    if ΔE > 0 then current ← next       /* better than current */
    else current ← next with probability e^(ΔE/T)
How would you do this probabilistic selection?

Probabilistic Selection
Select next with probability p.
[Figure: a number line from 0 to 1 with p marked]
Generate a random number uniformly in [0, 1]; if it is <= p, select next.
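A minimal Python sketch of the annealing loop, including the probabilistic acceptance step just described (`random.random() <= p`). The `random_successor` and `value` functions and the geometric cooling schedule are illustrative assumptions, not prescribed by the slides.

```python
import math
import random

def simulated_annealing(start, random_successor, value, t0=1.0, alpha=0.95, steps=1000):
    """Maximizing variant: always accept uphill moves, accept downhill moves
    with probability e^(dE/T), where dE = value(next) - value(current) < 0."""
    current = start
    T = t0
    for _ in range(steps):
        nxt = random_successor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() <= math.exp(dE / T):  # probabilistic selection
            current = nxt
        T *= alpha  # annealing schedule: T0, T1 = alpha*T0, T2 = alpha^2*T0, ...
    return current
```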

Simulated Annealing Properties
At a fixed temperature T, the state occupation probability reaches the Boltzmann distribution p(x) = αe^(E(x)/kT). If T is decreased slowly enough (very slowly), the procedure will reach the best state. "Slowly enough" has proven too slow for some researchers, who have developed alternative schedules.

Simulated Annealing Schedules
Acceptance criterion and cooling schedule

Simulated Annealing Applications
Basic problems:
- Traveling salesman
- Graph partitioning
- Matching problems
- Graph coloring
- Scheduling
Engineering:
- VLSI design: placement, routing, array logic minimization, layout
- Facilities layout
- Image processing
- Code design in information theory

Local Beam Search
Keeps more previous states in memory: simulated annealing kept just one previous state in memory, while this search keeps k states.
- randomly generate k initial states
- if any state is a goal, terminate
- else, generate all successors of all k states and select the best k
- repeat

Local Beam Search
[Figure: illustration of local beam search keeping the k best states at each step]
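A minimal Python sketch of local beam search, assuming illustrative `random_state`, `successors`, `value`, and `is_goal` functions supplied by the caller.

```python
import heapq

def local_beam_search(k, random_state, successors, value, is_goal, max_iters=100):
    """Keep the k best states; expand all of them and keep the k best successors."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        goals = [s for s in beam if is_goal(s)]
        if goals:
            return goals[0]
        candidates = [s2 for s in beam for s2 in successors(s)]
        if not candidates:
            break
        beam = heapq.nlargest(k, candidates, key=value)  # best k of all successors
    return max(beam, key=value)
```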

Genetic Algorithms
- Start with a random population of states
- Representation is serialized (i.e., strings of characters or bits)
- States are ranked with a fitness function
- Produce a new generation:
  - select random pair(s) with probability proportional to fitness
  - randomly choose a crossover point; offspring mix the two halves
  - randomly mutate bits
[Figure: crossover of parents 174629844710 and 776511094281 producing offspring 174611094281 and 776529844710, followed by mutation, e.g. 164611094281 and 776029844210]

Genetic Algorithm
Given: population P and fitness function f
repeat
    newP ← empty set
    for i = 1 to size(P)
        x ← RandomSelection(P, f)
        y ← RandomSelection(P, f)
        child ← Reproduce(x, y)
        if (small random probability) then child ← Mutate(child)
        add child to newP
    P ← newP
until some individual is fit enough or enough time has elapsed
return the best individual in P according to f
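A minimal Python rendering of this pseudocode for bit-string individuals; the helper names mirror the pseudocode (RandomSelection, Reproduce, Mutate), but their details, such as single-point crossover and single-bit mutation, are illustrative choices.

```python
import random

def random_selection(population, fitness):
    """Pick an individual with probability proportional to fitness (fitness > 0 assumed)."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

def reproduce(x, y):
    """Single-point crossover of two equal-length bit strings."""
    c = random.randrange(1, len(x))
    return x[:c] + y[c:]

def mutate(child):
    """Flip one random bit."""
    i = random.randrange(len(child))
    return child[:i] + ('1' if child[i] == '0' else '0') + child[i + 1:]

def genetic_algorithm(population, fitness, generations=100, p_mutate=0.1):
    for _ in range(generations):
        new_pop = []
        for _ in range(len(population)):
            x = random_selection(population, fitness)
            y = random_selection(population, fitness)
            child = reproduce(x, y)
            if random.random() < p_mutate:   # small random probability
                child = mutate(child)
            new_pop.append(child)
        population = new_pop
    return max(population, key=fitness)
```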

Using Genetic Algorithms
Two important aspects of using them:
1. How to encode your real-life problem
2. Choice of fitness function
Research example: I have N variables V1, V2, ..., VN and I want to produce a single number from them that best satisfies my fitness function F. I tried linear combinations, but that didn't work. A guy named Stan I met at a workshop in Italy told me to try Genetic Programming.

Genetic Programming
Like a genetic algorithm, but instead of finding the best character string, we want to find the best arithmetic expression tree. The leaves are the variables and the internal nodes are arithmetic operators. It uses the same ideas of crossover and mutation to produce the arithmetic expression tree that maximizes the fitness function.
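A small sketch of how such expression trees might be represented and recombined, assuming trees are nested tuples with integer leaves indexing the variables Vi; the representation and the crossover details are illustrative, not taken from the slides.

```python
import random

# A tree is either an int i (the leaf variable Vi) or a tuple (op, left, right).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def evaluate(tree, values):
    """Compute the single number the tree produces from the variable values."""
    if isinstance(tree, int):
        return values[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, values), evaluate(right, values))

def random_subtree(tree):
    """Pick a random node (subtree) of the tree."""
    if isinstance(tree, int) or random.random() < 0.3:
        return tree
    _, left, right = tree
    return random_subtree(random.choice([left, right]))

def crossover(parent1, parent2):
    """Replace a random subtree of parent1 with a random subtree of parent2."""
    donor = random_subtree(parent2)
    def splice(tree):
        if isinstance(tree, int) or random.random() < 0.3:
            return donor
        op, left, right = tree
        if random.random() < 0.5:
            return (op, splice(left), right)
        return (op, left, splice(right))
    return splice(parent1)
```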

Tree Structure for Quantifying Midface Hypoplasia
[Figure: the evolved arithmetic expression tree] The Xi are the selected histogram bins from an azimuth-elevation histogram of the surface normals of the face.

Local Search in Continuous Spaces
Given a continuous state space S = {(x1, x2, ..., xN) | xi ∈ R} and a continuous objective function f(x1, x2, ..., xN), the gradient of the objective function is the vector ∇f = (∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xN). The gradient gives the magnitude and direction of the steepest slope at a point.

Local Search in Continuous Spaces
To find a maximum, the basic idea is to set ∇f = 0. Alternatively, we can climb the gradient iteratively: the update of the current state becomes x ← x + α∇f(x), where α is a small constant. The theory behind this is taught in numerical methods classes; the book suggests the Newton-Raphson method. Luckily there are packages.
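A minimal numerical gradient-ascent sketch, approximating ∇f by central finite differences; the step size, iteration count, and example function are illustrative.

```python
def gradient_ascent(f, x, alpha=0.01, iters=1000, h=1e-6):
    """Climb f by repeatedly stepping in the direction of a finite-difference gradient."""
    x = list(x)
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            x_hi = x[:]; x_hi[i] += h
            x_lo = x[:]; x_lo[i] -= h
            grad.append((f(x_hi) - f(x_lo)) / (2 * h))    # central difference
        x = [xi + alpha * gi for xi, gi in zip(x, grad)]  # x <- x + alpha * grad f(x)
    return x

# Example: maximize f(x, y) = -(x - 1)^2 - (y + 2)^2, whose maximum is at (1, -2).
peak = gradient_ascent(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2, [0.0, 0.0])
```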

Computer Vision Pose Estimation Example
I have a 3D model of an object and an image of that object. I want to find the pose: the position and orientation of the camera.
[Figure: pose from 6 point correspondences; pose from ellipse-circle correspondence; pose from both 6 points and ellipse-circle correspondences]

Computer Vision Pose Estimation Example
[Figure: initial pose from points/ellipses and final pose after optimization]
The optimization searched a 6D space: (x, y, z, θx, θy, θz). The fitness function was how well the projection of the 3D object lined up with the edges in the image.

Searching with Nondeterministic Actions
Vacuum World (actions = {Left, Right, Suck})

Searching with Nondeterministic Actions
In the nondeterministic case, the result of an action can vary.
Erratic Vacuum World:
- When sucking a dirty square, the agent cleans it and sometimes also cleans up dirt in an adjacent square.
- When sucking a clean square, the agent sometimes deposits dirt on the carpet.

Generalization of the State-Space Model
1. Generalize the transition function to return a set of possible outcomes.
   old f: S × A → S
   new f: S × A → 2^S
2. Generalize the solution to a contingency plan:
   if state = s then action-set-1 else action-set-2
3. Generalize the search tree to an AND-OR tree.
A sketch of AND-OR search follows below.
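A minimal sketch of AND-OR search in Python, following the generalization above. It assumes a problem object with illustrative methods `initial_state`, `is_goal(s)`, `actions(s)`, and `results(s, a)`, the last returning the set of possible outcome states; the returned contingency plan nests an action with a per-outcome subplan.

```python
def and_or_search(problem):
    """Return a conditional plan for a nondeterministic problem, or None on failure."""
    return or_search(problem.initial_state, problem, [])

def or_search(state, problem, path):
    if problem.is_goal(state):
        return []                       # empty plan: already at a goal
    if state in path:                   # same as an ancestor: give up on this branch
        return None
    for action in problem.actions(state):
        plan = and_search(problem.results(state, action), problem, [state] + path)
        if plan is not None:
            return [action, plan]       # OR node: the agent chooses this action
    return None

def and_search(states, problem, path):
    plan = {}
    for s in states:                    # AND node: must handle every possible outcome
        subplan = or_search(s, problem, path)
        if subplan is None:
            return None
        plan[s] = subplan
    return plan
```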

AND-OR Search Tree
[Figure: an AND-OR search tree; at OR nodes the agent chooses an action, at AND nodes every possible outcome must be checked, and a state that is the same as an ancestor closes a loop]

Searching with Partial Observations
The agent does not always know its state! Instead, it maintains a belief state: a set of possible states it might be in. Example: a robot can be used to build a map of a hostile environment. It will have sensors that allow it to see the world.

Belief State Space for Sensorless Agent
[Figure: the belief-state space for the sensorless vacuum agent, starting from the initial belief state of all possible states; actions narrow the belief state to ones such as "knows it's on the left", "knows it's on the right", "knows its side is clean", and "knows the left side is clean"]
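As a small illustration of belief-state maintenance for a sensorless agent, a sketch of the prediction step on the two-square vacuum world; the state encoding (location, dirt at A, dirt at B) and the function names are illustrative.

```python
# Sensorless vacuum world: a state is (robot_location, dirt_at_A, dirt_at_B).
def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Suck':
        return (loc, dirt_a and loc != 'A', dirt_b and loc != 'B')
    return state

def predict(belief, action):
    """Belief-state update for a sensorless agent: the union of possible results."""
    return frozenset(result(s, action) for s in belief)

# Starting from total ignorance (all 8 states), Right then Suck leaves a belief
# state of 2 states: the agent knows it is on the right and that square is clean.
all_states = frozenset((loc, da, db) for loc in 'AB'
                       for da in (True, False) for db in (True, False))
belief = predict(predict(all_states, 'Right'), 'Suck')
```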

Online Search Problems
An active agent:
- executes actions
- acquires percepts from sensors
- operates in a deterministic and fully observable environment
- has to perform an action to know its outcome
Examples: web search, autonomous vehicle