Artificial Intelligence


Informed Search and Exploration
Chapter 4 (Sections 4.3-4.6)

Searching: So Far
- We've discussed how to build goal-based and utility-based agents that search to solve problems.
- We've also presented both uninformed (blind) and informed (heuristic) approaches to search.
- Everything covered so far explores the search space systematically, which may mean enumerating the entire state space before finding a solution.

Last Time: Search Strategies
- Uninformed (use only the information available in the problem formulation): breadth-first, uniform-cost, depth-first, depth-limited, iterative deepening.
- Informed (use heuristics to guide the search): best-first search, including greedy search and A* search.

This Time
- Local search algorithms: hill climbing, simulated annealing, genetic algorithms.

Local Search Algorithms
- Unlike the other search strategies, local search algorithms keep a single current state and try to improve it in order to solve the problem at hand.
- The path is irrelevant; the goal state itself is the solution.
- We do not necessarily have a designated start state.
- The objective is to search through the problem space for solutions that are better, the best, or that meet certain criteria (a goal).

Optimization
- Problems where we search through complete solutions to find the best one are often referred to as optimization problems.
- Most optimization tasks belong to the class of computational problems called NP (non-deterministic polynomial time solvable).
- For NP problems, state spaces are usually exponential, so systematic search methods are not time- or space-efficient.

Optimization Problems
- As it turns out, many real-world problems that we might want an agent to solve are similarly hard optimization problems:
  - Bin-packing: pack a collection of objects into the minimum number of fixed-size "bins"
  - Logistics planning
  - VLSI layout/circuit design
  - Theorem proving
  - Navigation/routing
  - Production scheduling
  - Learning the parameters of a neural network
- For optimization problems, there is a well-defined objective function that we are trying to optimize.
- In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.

Local Search
- Local search is a greedy, generally incomplete search strategy that focuses on a specific (local) part of the search space rather than trying to branch out into all of it.
- We consider only the neighborhood of the current state rather than the entire state space (multiple paths).
- Consider the state-space landscape: a landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function).
- If elevation corresponds to cost, then the aim is to find the lowest valley: a global minimum. If elevation corresponds to an objective function, then the aim is to find the highest peak: a global maximum. (You can convert from one to the other just by inserting a minus sign.)
- Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal one always finds a global minimum/maximum.

Hill-Climbing (HC)
- The most common local search strategy is hill-climbing, when the task is to maximize the objective function.
  - Called gradient ascent if we are maximizing, gradient descent if we are minimizing.
- We consider all the successors of the current node, expand the best one, and throw the rest away.
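The minus-sign conversion between cost minimization and objective maximization mentioned above can be made concrete with a short sketch; the cost function here is an illustrative assumption, not from the slides:

```python
def cost(state):
    # illustrative cost landscape: distance from the (unknown to the agent) optimum at 7
    return abs(state - 7)

def objective(state):
    # same landscape with a minus sign: valleys become peaks
    return -cost(state)

states = range(15)
# the global minimum of cost coincides with the global maximum of objective
print(min(states, key=cost))       # prints 7
print(max(states, key=objective))  # prints 7
```

The two formulations are interchangeable, which is why the slides freely switch between "lowest valley" and "highest peak."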

Hill-Climbing
- The hill-climbing search algorithm is simply a loop that continually moves in the direction of increasing value, that is, uphill.
- It terminates when it reaches a "peak" where no neighbor has a higher value.
- The algorithm does not maintain a search tree, so the current-node data structure need only record the state and its objective function value.
- Hill-climbing does not look ahead beyond the immediate neighbors of the current state.
- Hill climbing is sometimes called greedy local search. It often performs quite well: it usually makes very rapid progress towards a solution, because it is usually quite easy to improve a bad state.

Hill-Climbing Search
1. Pick a solution from the search space and evaluate it. Define this as the current solution.
2. Generate a new solution by applying a transformation to the current solution, and evaluate it.
3. If the new solution is better than the current one, make it the new current solution; otherwise discard it.
4. Repeat steps 2 and 3 until there are no more possible transformations.

Hill-Climbing Issues
- Hill climbing often gets stuck for the following reasons: local maxima, ridges, and plateaus (which may allow sideways moves).

Objective Surfaces
- The objective surface is a plot of the objective function's landscape.
- The various levels of optimality can be seen on the objective surface.
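The hill-climbing loop described above can be sketched in a few lines of Python. The bit-flip neighborhood and the max-ones objective are illustrative assumptions for the demo, not part of the slides:

```python
import random

def hill_climb(initial, neighbors, value):
    """Greedy local search: repeatedly move to the best-valued neighbor."""
    current = initial
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):   # no neighbor is better: we are at a peak
            return current
        current = best

# Illustrative problem (an assumption for the demo): maximize the number of
# 1-bits in a binary string; neighbors differ by a single bit flip.
def bit_flip_neighbors(state):
    return [state[:i] + (1 - state[i],) + state[i + 1:] for i in range(len(state))]

def ones(state):
    return sum(state)

start = tuple(random.choice((0, 1)) for _ in range(8))
peak = hill_climb(start, bit_flip_neighbors, ones)
print(peak)  # this objective has a single peak, all ones, reachable from any start
```

On this objective every state can be improved until all bits are 1, so hill climbing always succeeds; on a landscape with local maxima the same loop would get stuck, which motivates the escape strategies discussed later.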

Escaping Local Optima
- Local optima may be acceptable, but sometimes we want to find the absolute best solution.
- There are several ways we can try to avoid local optima and find more globally optimal solutions: random restarting and simulated annealing.

Random Restarting
- If at first you don't succeed, try, try again!
- The idea here is to run the standard HC search algorithm several times, each with a different, randomized initial state.
- Since HC is a local search strategy, trying multiple initial states allows us to locally explore a wider range of the search space.
- If we pick lucky initial states, we can find the global optimum.
- It turns out that, if each HC run has a probability p of success, the number of restarts needed is approximately 1/p.
- For example, with 8-Queens there is a probability of success p ≈ 0.14, and 1/0.14 ≈ 7. So, on average, we would need only about 7 randomly initialized trials of basic HC search to find a solution.

Simulated Annealing (SA)
- In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to coalesce into a low-energy crystalline state.
- SA builds on an analogy with thermodynamics: the Boltzmann probability distribution describes the relative probabilities of finding a system in different states as a function of temperature.
- For this discussion we switch from HC to gradient descent (minimizing the cost).
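The restart loop, and the "about 1/p restarts" estimate above, can be checked with a small simulation. The helper names and the stand-in search routine are assumptions for the demo; a real use would plug in an actual hill-climbing run:

```python
import random

def random_restart(search, random_state, is_goal, max_restarts=1000):
    """Run a local search repeatedly from fresh random initial states."""
    for attempt in range(1, max_restarts + 1):
        result = search(random_state())
        if is_goal(result):
            return result, attempt
    return None, max_restarts

# Sanity check of the 1/p estimate (a simulation, not from the slides):
# if a single run succeeds with probability p = 0.14, the mean number of
# attempts should be close to 1/0.14, i.e. about 7.1.
p = 0.14
random.seed(0)

def fake_search(state):          # stands in for one hill-climbing run
    return "goal" if random.random() < p else "stuck"

attempts = [random_restart(fake_search, lambda: None, lambda r: r == "goal")[1]
            for _ in range(10_000)]
print(sum(attempts) / len(attempts))  # close to 7.1
```

The number of attempts is geometrically distributed, whose mean is exactly 1/p; the simulation just makes that visible.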

Simulated Annealing
- According to thermodynamics, to grow a crystal you start by heating the raw material into a molten state; the crystal melt is then cooled. If the temperature is reduced too quickly, irregularities occur and the crystal does not reach its ground state.
- By analogy, SA relies on a good cooling schedule, which maps the current time to a temperature T, to find the optimal solution. The schedule is usually exponential and can be very difficult to devise.
- Imagine the task of getting a ping-pong ball into the deepest crevice of a bumpy surface. If we just let the ball roll, it will come to rest in a local minimum. The simulated-annealing solution is to start shaking hard (i.e., at a high temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature).
- In some ways the simulated-annealing algorithm is similar to hill climbing, but instead of picking the best move it picks a random move:
  - If the move improves the situation, it is always accepted.
  - Otherwise, it is accepted with some probability less than 1. That probability decreases exponentially with the "badness" of the move: the amount by which the evaluation is worsened.
- Requirements for simulated annealing:
  - A description of possible system states (representation).
  - A generator of random changes in the configuration (search operator).
  - An evaluation function E (analog of energy) for minimization.
  - A parameter T (analog of temperature) and an annealing schedule.
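The acceptance rule described above can be sketched directly: accept improvements always, and accept a worsening move with probability exp(-delta/T). The bumpy 1-D energy function and the particular cooling schedule are illustrative assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(initial, random_neighbor, energy, schedule):
    """Minimize energy E; schedule(t) maps the time step t to a temperature T."""
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:                        # schedule exhausted: system has frozen
            return current
        nxt = random_neighbor(current)
        delta = energy(nxt) - energy(current)
        # Accept improvements always; accept worse moves with probability
        # exp(-delta / T), which shrinks as T drops.
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = nxt
        t += 1

# Illustrative demo (assumed, not from the slides): minimize a bumpy 1-D
# function over the integers with an exponential cooling schedule.
def bumpy(x):
    return (x - 3) ** 2 + 4 * math.sin(2 * x)

random.seed(1)
best = simulated_annealing(
    initial=20,
    random_neighbor=lambda x: x + random.choice((-1, 1)),
    energy=bumpy,
    schedule=lambda t: 10 * (0.95 ** t) if t < 400 else 0,
)
print(best)  # an integer local minimum of bumpy, ideally the global one at x = 2
```

Early on the high temperature lets the walk hop over the small local minimum near x = 5; as T decays the search behaves more and more like greedy descent.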
- The acceptance probability also decreases as the temperature T goes down.

Genetic Algorithms
- So far, the optimization strategies we've discussed search for a single solution, one state at a time, within a neighborhood.
- Genetic algorithms (GAs) are a different search approach that maintains a population of states, or individuals, which evolves. GAs are also called evolutionary search.

Evolutionary Analogy
- Consider a population of rabbits. Some individuals are faster and smarter than others.
- Slower, dumber rabbits are likely to be caught and eaten by foxes.
- Fast, smart rabbits survive to do what rabbits do best: make more rabbits!

Evolutionary Analogy
- The rabbits that survive breed with each other to generate offspring, which starts to mix up their genetic material:
  - Fast rabbits might breed with fast rabbits, fast rabbits with slow rabbits, smart with not-so-smart, and so on.
- Furthermore, nature occasionally throws in a wild hare, because genes can mutate.
- In this analogy, an individual rabbit represents a solution to the problem (i.e., a single point in the state space). The state description is its DNA, if you will.
- The foxes represent the problem constraints. Solutions that do well are likely to survive.
- What we need to create are notions of natural selection, reproduction, and mutation.

Core Elements of GAs
- For selection, we use a fitness function to rank the individuals of the population.
- For reproduction, we define a crossover operator which takes the state descriptions of two individuals and combines them to create new ones.
- For mutation, we can merely choose individuals in the population and alter part of their state.

Genetic Algorithm Example
    POP = initial population
    repeat {                              // with every generation
        NEW_POP = empty
        for I = 1 to POP_SIZE {
            X = fit individual            // natural selection
            Y = fit individual
            CHILD = crossover(X, Y)       // reproduction
            with small random probability, mutate(CHILD)   // mutation
            add CHILD to NEW_POP
        }
        POP = NEW_POP
    } until solution found or enough time elapsed
    return most fit individual in POP

- This algorithm completely replaces the population for each new generation, but we can allow individuals from older generations to live on.
- Reproduction here is only between two parents (as in nature), but we can allow for more!
- The population size is also fixed here, but we could let it vary from one generation to the next.

Selection
- Selection (either to reproduce or to live on) from one generation to the next relies on the fitness function.
- We can usually think of the fitness function as a heuristic, or as the objective function.
- We want to apply pressure so that good solutions survive and bad solutions die.
  - Too much pressure and we converge to sub-optimal solutions; too little and we don't make much progress.
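The pseudocode above can be turned into a runnable sketch. The max-ones fitness, tournament selection, and all parameter values are illustrative assumptions; the slides' pseudocode does not specify how a "fit individual" is chosen:

```python
import random

def tournament(pop, fitness, k=3):
    """Pick the fittest of k randomly sampled individuals (assumed selection rule)."""
    return max(random.sample(pop, k), key=fitness)

def crossover(x, y):
    """Single-point crossover at a random split point."""
    point = random.randrange(1, len(x))
    return x[:point] + y[point:]

def mutate(child):
    """Flip one randomly chosen bit in place."""
    i = random.randrange(len(child))
    child[i] = 1 - child[i]

def genetic_algorithm(pop_size, length, fitness, generations, p_mut=0.05):
    """Generational GA: the whole population is replaced each generation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            x = tournament(pop, fitness)        # natural selection
            y = tournament(pop, fitness)
            child = crossover(x, y)             # reproduction
            if random.random() < p_mut:         # mutation
                mutate(child)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Illustrative fitness (an assumption for the demo): count of 1-bits.
random.seed(2)
best = genetic_algorithm(pop_size=30, length=20, fitness=sum, generations=40)
print(sum(best))
```

Tournament selection is one common way to realize "X = fit individual": larger k means stronger selection pressure, with the convergence trade-off discussed above.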

Selection
- Deterministic selection:
  1. Rank all the individuals using the fitness function and choose the best k to survive.
  2. Replace the rest with offspring.
  - Can lead to fast convergence (and local optima).
- Stochastic selection:
  - Randomly choose the k individuals to survive.
  - Slower to converge, and good solutions can be lost.

Reproduction
- The unique thing about GAs is the ability of solutions to inherit properties from other solutions in the population.
- The basic way to perform a crossover operation is to splice together parts of the state description from each parent.
- There are many different ways to choose crossover point(s) for reproduction:
  - Single-point: choose the center, or some optimal point in the state description; take the first half of one parent and the second half of the other.
  - Random: choose the split point randomly (or in proportion to the parents' fitness scores).
  - Uniform: choose each element of the state description independently, at random (or in proportion to fitness).
  - Etc.

Mutation
- There are also a variety of ways to mutate individuals in the population.
- The first question to consider is who to mutate:
  - Alter the most fit? The least fit? Random individuals?
  - Mutate children only, or surviving parents as well? How many to mutate?
- The second question is how to mutate:
  - Totally arbitrarily? Mutate to a better neighbor?

GAs and Creativity
- The objective of the GA search: the population as a whole is trying to converge to an optimal solution.
- Because solutions can evolve from a variety of factors, without prodding from us as to which direction to go (as in local search), very novel problem solutions can be found/discovered.

Genetic Algorithm Example
- Constructing a jumbo jet (a GA cannot guarantee an optimal solution): the shape of the wings must let the plane fly, be fuel-efficient, respect size and weight limits, keep the plane stable, keep it intact, etc.
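The three crossover variants listed above can be sketched side by side; representing parents as equal-length tuples of bits is an assumption for the demo:

```python
import random

def single_point_crossover(x, y):
    """Split at the center: first half of one parent, second half of the other."""
    mid = len(x) // 2
    return x[:mid] + y[mid:]

def random_point_crossover(x, y):
    """Split at a randomly chosen point."""
    point = random.randrange(1, len(x))
    return x[:point] + y[point:]

def uniform_crossover(x, y):
    """Choose each element independently from either parent."""
    return tuple(random.choice(pair) for pair in zip(x, y))

x, y = (0, 0, 0, 0, 0, 0), (1, 1, 1, 1, 1, 1)
print(single_point_crossover(x, y))   # (0, 0, 0, 1, 1, 1)
print(random_point_crossover(x, y))
print(uniform_crossover(x, y))
```

With all-zero and all-one parents, the point-based variants always produce a run of zeros followed by a run of ones, while uniform crossover can interleave bits freely.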

Summary
- Iterative improvement algorithms keep only a single state in memory, and can get stuck in local extrema/optima.
- Local search methods are more appropriate for solving complete-state search and optimization problems, where state spaces can be prohibitively large and the goal differs from that of systematic search strategies.
- There are several effective ways of escaping local optima in local search, which exploit different properties:
  - Random restarting tries several times from different parts of the search space.
  - Simulated annealing allows a wider variety of moves by searching stochastically (it is complete and optimal given a slow enough cooling schedule).
  - Local beam search keeps track of k states rather than just one.
  - A genetic algorithm is a variant of stochastic local beam search.

Next Time
- Game Playing! Sections 6.1-6.4