CS 188: Artificial Intelligence. CSPs II + Local Search. Prof. Scott Niekum, The University of Texas at Austin. [These slides are based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Last time: CSPs. CSPs: variables, domains, constraints. Constraints can be implicit (provide code to compute) or explicit (provide a list of the legal tuples), and unary / binary / n-ary. Goals: here, find any solution; also: find all, find best, etc.

Last time: Backtracking

Last time: Arc Consistency of an Entire CSP. A simple form of propagation makes sure all arcs are consistent (illustrated on the map-coloring constraint graph over WA, NT, SA, Q, NSW, V). Important: if X loses a value, neighbors of X need to be rechecked! Arc consistency detects failure earlier than forward checking. It can be run as a preprocessor or after each assignment. What's the downside of enforcing arc consistency? Remember: delete from the tail!
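The propagation described above can be sketched as a small AC-3-style routine. This is a minimal sketch, not the course's code: the `domains` / `neighbors` / `satisfies` interfaces are illustrative assumptions.

```python
from collections import deque

def ac3(domains, neighbors, satisfies):
    """Enforce arc consistency over the whole CSP.
    domains: var -> set of values (pruned in place);
    neighbors: var -> set of adjacent variables;
    satisfies(x, vx, y, vy): True if x=vx together with y=vy is allowed.
    Returns False as soon as some domain is wiped out."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        tail, head = queue.popleft()
        # "Delete from the tail": drop tail values with no supporting head value
        removed = {v for v in domains[tail]
                   if not any(satisfies(tail, v, head, w) for w in domains[head])}
        if removed:
            domains[tail] -= removed
            if not domains[tail]:
                return False  # domain wipe-out: failure detected early
            # If the tail lost a value, arcs into the tail must be rechecked
            for z in neighbors[tail] - {head}:
                queue.append((z, tail))
    return True
```

Note how the re-enqueueing step implements the slide's warning: whenever X loses a value, every arc pointing into X goes back on the queue.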

Improving Backtracking. General-purpose ideas give huge gains in speed, but it's all still NP-hard. Filtering: can we detect inevitable failure early? Ordering: which variable should be assigned next (MRV), and in what order should its values be tried (LCV)? Structure: can we exploit the problem structure?
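The two ordering heuristics can be sketched as follows (an illustrative sketch under assumed interfaces, not the staff implementation):

```python
def select_mrv_variable(assignment, domains):
    """MRV: pick the unassigned variable with the fewest remaining values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def order_lcv_values(var, assignment, domains, neighbors, satisfies):
    """LCV: try first the values that rule out the fewest choices
    for the unassigned neighbors of var."""
    def ruled_out(value):
        return sum(1 for n in neighbors[var] if n not in assignment
                     for w in domains[n] if not satisfies(var, value, n, w))
    return sorted(domains[var], key=ruled_out)
```

MRV is "fail fast" (tackle the most constrained variable first); LCV is "fail last" (keep the most options open for the rest of the problem).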

Structure

Problem Structure. Extreme case: independent subproblems. Example: Tasmania and the mainland do not interact. Independent subproblems are identifiable as connected components of the constraint graph. Suppose a graph of n variables can be broken into subproblems of only c variables each: worst-case solution cost is O((n/c) d^c), linear in n. E.g., n = 80, d = 2, c = 20: 2^80 ≈ 4 billion years at 10 million nodes/sec, while (4)(2^20) ≈ 0.4 seconds at 10 million nodes/sec.

Tree-Structured CSPs. Theorem: if the constraint graph has no loops, the CSP can be solved in O(n d^2) time. Compare to general CSPs, where worst-case time is O(d^n). This property also applies to probabilistic reasoning (later): an example of the relation between syntactic restrictions and the complexity of reasoning.

Tree-Structured CSPs. Algorithm for tree-structured CSPs: Order: choose a root variable, and order the variables so that parents precede children. Remove backward: for i = n : 2, apply RemoveInconsistent(Parent(Xi), Xi). Assign forward: for i = 1 : n, assign Xi consistently with Parent(Xi). Runtime: O(n d^2) (why?)
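The two passes can be sketched as below. This is a hedged sketch, not the staff implementation: `order` (parents before children), `parent` (root maps to None), and `satisfies` are assumed interfaces.

```python
def solve_tree_csp(order, parent, domains, satisfies):
    """Solve a tree-structured CSP in O(n d^2).
    Backward pass prunes each parent's domain against its child;
    forward pass then assigns without backtracking."""
    # Remove backward: for i = n..2, make arc Parent(Xi) -> Xi consistent
    for x in reversed(order[1:]):
        p = parent[x]
        domains[p] = {vp for vp in domains[p]
                      if any(satisfies(p, vp, x, vx) for vx in domains[x])}
        if not domains[p]:
            return None  # some domain emptied: no solution
    # Assign forward: each variable gets a value consistent with its parent
    assignment = {}
    for x in order:
        p = parent[x]
        for v in sorted(domains[x]):
            if p is None or satisfies(p, assignment[p], x, v):
                assignment[x] = v
                break
        else:
            return None  # cannot happen once the backward pass succeeds
    return assignment
```

The backward pass is n−1 arc-consistency checks of cost O(d^2) each, which is where the O(n d^2) bound on the slide comes from.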

Tree-Structured CSPs. Claim 1: after the backward pass, all root-to-leaf arcs are consistent. Proof: each arc X → Y was made consistent at one point, and Y's domain could not have been reduced thereafter (because Y's children were processed before Y). Claim 2: if root-to-leaf arcs are consistent, the forward assignment will not backtrack. Proof: induction on position. Why doesn't this algorithm work with cycles in the constraint graph? Note: we'll see this basic idea again with Bayes nets.

Nearly Tree-Structured CSPs. Conditioning: instantiate a variable, prune its neighbors' domains. Cutset conditioning: instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree. Cutset size c gives runtime O(d^c (n−c) d^2), very fast for small c.

Cutset Conditioning. Choose a cutset (here, SA). Instantiate the cutset in all possible ways: d^c assignments. Compute the residual CSP for each assignment, removing any domain values inconsistent with the cutset assignment. Solve the residual CSPs (tree-structured): O((n−c) d^2) each.
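The procedure can be sketched as follows. This is an illustrative sketch: `conflicts` and the `solve_residual` callback (any tree-CSP solver) are assumed interfaces, not course code.

```python
from itertools import product

def cutset_conditioning(cutset, variables, domains, conflicts, solve_residual):
    """Instantiate the cutset in all d^c ways; for each, prune the remaining
    domains w.r.t. the cutset assignment and hand the residual
    (tree-structured) CSP to solve_residual.
    conflicts(x, vx, y, vy): True if the pair violates a constraint;
    solve_residual(domains) -> assignment dict or None."""
    rest = [v for v in variables if v not in cutset]
    for values in product(*(sorted(domains[c]) for c in cutset)):
        cut_assign = dict(zip(cutset, values))
        # Prune residual domains against this cutset assignment
        residual = {x: {v for v in domains[x]
                        if not any(conflicts(x, v, c, cv)
                                   for c, cv in cut_assign.items())}
                    for x in rest}
        if all(residual.values()):
            solution = solve_residual(residual)
            if solution is not None:
                return {**cut_assign, **solution}
    return None  # no cutset assignment extends to a full solution
```

With the SA example from the slide, conditioning on SA turns the Australia graph into a tree over the remaining states.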

Cutset Quiz Find the smallest cutset for the graph below.

Iterative Improvement

Iterative Algorithms for CSPs. Local search methods typically work with complete states, i.e., all variables assigned. To apply them to CSPs: take an assignment with unsatisfied constraints; operators reassign variable values. No fringe! Live on the edge. Algorithm: while not solved, Variable selection: randomly select any conflicted variable. Value selection: the min-conflicts heuristic: choose a value that violates the fewest constraints, i.e., hill climb with h(n) = total number of violated constraints. Can get stuck in local minima (we'll come back to this idea in a few slides).
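The loop above can be sketched as a generic min-conflicts solver (a minimal sketch, with the `conflicts` counting interface as an illustrative assumption):

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10000, seed=0):
    """conflicts(assignment, var, value) -> number of constraints violated
    if var took that value under the (otherwise complete) assignment."""
    rng = random.Random(seed)
    # Start from a complete (random) assignment, constraints be damned
    assignment = {v: rng.choice(sorted(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(assignment, v, assignment[v]) > 0]
        if not conflicted:
            return assignment  # all constraints satisfied
        var = rng.choice(conflicted)  # variable selection: random conflicted var
        # Value selection: min-conflicts heuristic (ties broken arbitrarily)
        assignment[var] = min(sorted(domains[var]),
                              key=lambda val: conflicts(assignment, var, val))
    return None  # stuck in a local minimum or out of steps
```

Note the solver never backtracks: it just keeps repairing a complete assignment.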

Example: 4-Queens. States: 4 queens in 4 columns (4^4 = 256 states). Operators: move a queen within its column. Goal test: no attacks. Evaluation: c(n) = number of attacks.

Video of Demo Iterative Improvement n Queens

Performance of Min-Conflicts. The runtime of min-conflicts on n-queens is roughly independent of problem size! Why?? Solutions are densely distributed in the state space. Given a random initial state, min-conflicts can solve n-queens in almost constant time with high probability for arbitrary n (e.g., n = 10,000,000), in ~50 steps! The same appears to be true for any randomly-generated CSP, except in a narrow range of the ratio of constraints to variables.

Summary: CSPs CSPs are a special kind of search problem: States are partial assignments Goal test defined by constraints Basic solution: backtracking search Speed-ups: Ordering Filtering Structure Iterative min-conflicts is often effective in practice

Local Search

Local Search. Tree search keeps unexplored alternatives on the fringe (ensures completeness). Local search: improve a single option until you can't make it better (no fringe!). New successor function: local changes. Generally much faster and more memory efficient (but incomplete and suboptimal).

Hill Climbing. Simple, general idea: start wherever; repeat: move to the best neighboring state; if no neighbor is better than the current state, quit. What's bad about this approach? Complete? Optimal? What's good about it?
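The loop above can be sketched generically (an illustrative sketch; `neighbors` and `score` are assumed callables):

```python
def hill_climb(start, neighbors, score):
    """Greedy ascent: repeatedly move to the best neighboring state;
    quit when no neighbor beats the current state (a local maximum)."""
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current  # no better neighbor: stop (maybe not the global optimum)
        current = best
```

The stopping condition is exactly what makes hill climbing incomplete: it halts at any local maximum or plateau, not necessarily the global one.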

Hill Climbing Diagram

Hill Climbing Quiz Starting from X, where do you end up? Starting from Y, where do you end up? Starting from Z, where do you end up?

Simulated Annealing. Idea: escape local maxima by allowing downhill moves, but make them rarer as time goes on. Shake! Shake!

Simulated Annealing. Theoretical guarantee: if T is decreased slowly enough, simulated annealing will converge to the optimal state! Is this an interesting guarantee? It sounds like magic, but reality is reality: the more downhill steps you need to escape a local optimum, the less likely you are to ever make them all in a row.
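A minimal sketch of the accept/reject loop follows; the `propose` and `schedule` interfaces are illustrative assumptions, and the usual Metropolis acceptance rule exp(delta / T) is used for downhill moves.

```python
import math
import random

def simulated_annealing(start, propose, score, schedule, seed=0):
    """Always accept uphill moves; accept a downhill move with probability
    exp(delta / T), which shrinks as the temperature T cools."""
    rng = random.Random(seed)
    current, t = start, 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current  # frozen: stop
        candidate = propose(current, rng)
        delta = score(candidate) - score(current)
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = candidate
        t += 1
```

Note how the slide's intuition shows up in the math: a downhill run of k unit steps at temperature T is accepted with probability about exp(−k/T), which vanishes quickly as T cools.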

Beam Search. Like greedy hill-climbing search, but keep K states at all times (K = 1 gives greedy search). Variables: beam size; encourage diversity? The best choice in MANY practical settings. Complete? Optimal? Why do we still need optimal methods?
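A minimal sketch of the idea (illustrative interfaces; states are assumed hashable):

```python
def beam_search(starts, neighbors, score, beam_size, steps):
    """Keep the best `beam_size` states at every step (beam_size=1 is greedy)."""
    beam = sorted(starts, key=score, reverse=True)[:beam_size]
    for _ in range(steps):
        pool = set(beam)  # keep current states as candidates too
        for state in beam:
            pool.update(neighbors(state))
        beam = sorted(pool, key=score, reverse=True)[:beam_size]
    return beam[0]  # highest-scoring state found
```

Keeping K states hedges against any single greedy path stalling, but the beam can still collectively converge on one basin, which is why encouraging diversity is listed as a design variable.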

Gradient Methods. Continuous state spaces pose a problem: we cannot enumerate the successors to select the optimal one. Option 1: discretization or random sampling, so we choose from a finite number of choices. Option 2: continuous optimization via gradient ascent: take a step along the gradient (the vector of partial derivatives). What if you can't compute the gradient, i.e., maybe you can only sample the function? Estimate the gradient from samples! Stochastic gradient descent. We will return to this in neural networks / deep learning.
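Both ideas can be sketched in one dimension (an illustrative sketch; the learning rate and step count are assumed hyperparameters):

```python
def gradient_ascent(x0, grad, lr=0.1, steps=100):
    """Repeatedly take a step along the gradient of the objective."""
    x = x0
    for _ in range(steps):
        x = x + lr * grad(x)
    return x

def estimate_gradient(f, x, eps=1e-4):
    """Central finite-difference estimate, for when you can only
    sample the function and cannot compute its gradient."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)
```

For the quadratic f(x) = −(x − 3)^2 the update is x ← x + 0.1·(−2)(x − 3), so the distance to the maximizer at x = 3 shrinks by a factor of 0.8 each step.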

Genetic Algorithms Genetic algorithms use a natural selection metaphor Keep best N hypotheses at each step (selection) based on a fitness function Also have pairwise crossover operators, with optional mutation to give variety Possibly the most misunderstood, misapplied (and even maligned) technique around
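The metaphor on the slide maps onto a short loop; this is a hedged sketch, with the crossover/mutation callbacks and the 0.1-style mutation rate as illustrative assumptions.

```python
import random

def genetic_algorithm(population, fitness, crossover, mutate,
                      generations=50, keep=4, mutation_rate=0.1, seed=0):
    """Selection: keep the fittest `keep` hypotheses (keep >= 2); then refill
    the population by pairwise crossover of survivors, with occasional
    mutation for variety."""
    rng = random.Random(seed)
    size = len(population)
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[:keep]
        children = []
        while len(survivors) + len(children) < size:
            a, b = rng.sample(survivors, 2)     # pick two parents
            child = crossover(a, b, rng)        # pairwise crossover
            if rng.random() < mutation_rate:
                child = mutate(child, rng)      # optional mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```

Because the survivors are carried over unchanged (elitism), the best fitness seen never decreases from one generation to the next.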

Example: N-Queens. Why does crossover make sense here? When wouldn't it make sense? What would mutation be? What would a good fitness function be?

Next Time: Adversarial Search!