CS 4700: Artificial Intelligence


CS 4700: Foundations of Artificial Intelligence, Fall 2017. Instructor: Prof. Haym Hirsh. Lecture 7.

Extra Credit Opportunity: Lecture today, 4:15pm, Gates G01
"Learning to See Without a Teacher", Phillip Isola (UC Berkeley)
Over the past decade, learning-based methods have driven rapid progress in computer vision. However, most such methods still require a human "teacher" in the loop. Humans provide labeled examples of target behavior, and also define the objective that the learner tries to satisfy. The way learning plays out in nature is rather different: ecological scenarios involve huge quantities of unlabeled data and only a few supervised lessons provided by a teacher (e.g., a parent). I will present two directions toward computer vision algorithms that learn more like ecological agents. The first involves learning from unlabeled data. I will show how objects and semantics can emerge as a natural consequence of predicting raw data, rather than labels. The second is an approach to data prediction where we not only learn to make predictions, but also learn the objective function that scores the predictions. In effect, the algorithm learns not just how to solve a problem, but also what exactly needs to be solved in order to generate realistic outputs. Finally, I will talk about my ongoing efforts toward sensorimotor systems that not only learn from provided data but also act to sample more data on their own.

Homework 1: Available on the course website. Due: Tuesday, February 28, at the start of class, 8:40am. Submission through Gradescope; instructions to be given.

Today: Informed Search (R&N Ch 3). CS 4701: last 15 minutes. Thursday, February 23: Adversarial Search (R&N Ch 5).

Special Cases of Best-First Search
g(s): cost of going from the initial state to s
h(s): estimate of the cost of going from s to a goal state
f(s) = g(s): Uniform Cost Search
f(s) = h(s): Greedy Best-First Search
f(s) = g(s) + h(s): A* Search
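
To make the three special cases concrete, here is a minimal best-first search sketch in Python, parameterized by the evaluation function f. The graph interface (a successors function returning (state, cost) pairs) and the heuristic h are illustrative assumptions, not code from the lecture.

import heapq
from itertools import count

def best_first_search(start, is_goal, successors, f):
    """Generic best-first search: repeatedly expand the open state with the
    lowest f value. successors(s) yields (next_state, step_cost) pairs, and
    f(s, g) scores a state given its path cost g so far."""
    tie = count()                        # tie-breaker so states are never compared directly
    frontier = [(f(start, 0.0), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, s, path = heapq.heappop(frontier)
        if is_goal(s):
            return path, g
        for s2, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(frontier, (f(s2, g2), next(tie), g2, s2, path + [s2]))
    return None, float("inf")

# The three special cases from the slide, given some heuristic h(s):
# uniform_cost = lambda s, g: g            # f(s) = g(s)
# greedy       = lambda s, g: h(s)         # f(s) = h(s)
# a_star       = lambda s, g: g + h(s)     # f(s) = g(s) + h(s)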

A* Search
f(s) = g(s) + h(s) is an estimate of the cost of a solution that goes through s.
Notation:
h*(s): the true (unknown) cost of going from s to the cheapest solution
f*(s) = g(s) + h*(s): the true (unknown) cost of the cheapest solution that goes through s

A* Search: Admissible heuristic evaluation functions h(s)
h(s) is admissible if 0 ≤ h(s) ≤ h*(s) for all s. In other words, h never overestimates h*.
Examples:
Route finding: straight-line distance between s and the goal
8-puzzle (and the 15-puzzle and all tile-sliding puzzles like it): number of out-of-place tiles; sum of how many spaces each tile is from where it should be
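
To make the two 8-puzzle heuristics concrete, here is a small Python sketch; the tuple-of-9 state encoding (0 marks the blank) and the function names are illustrative assumptions, not from the lecture.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank; an assumed encoding

def misplaced_tiles(state, goal=GOAL):
    """Number of out-of-place tiles (the blank is not counted)."""
    return sum(1 for tile, g in zip(state, goal) if tile != 0 and tile != g)

def manhattan_distance(state, goal=GOAL):
    """Sum over tiles of how many rows plus columns each tile is from its goal square."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = goal_pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

# Neither heuristic ever overestimates the true number of moves, so both are admissible.
print(misplaced_tiles((1, 2, 3, 4, 5, 6, 0, 7, 8)))     # 2
print(manhattan_distance((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # 2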

A* Search
If the branching factor b of a problem is finite, the costs of all operators are positive, and h(s) is admissible, then A* search is guaranteed to find an optimal solution. (Sometimes worded as "A* is admissible.")

A* Search: Consistent heuristic evaluation functions h(s)
h(s) is consistent if h(s) ≤ cost(s, s') + h(s') for every state s and every successor s' of s.
A form of the triangle inequality (the sum of the lengths of two sides of a triangle is no less than the length of the third side). Does not refer to h*.
If h is consistent then h is admissible, hence:
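
The definition can be checked mechanically on a small graph: a heuristic is consistent exactly when h(s) ≤ cost(s, s') + h(s') holds on every edge. A minimal sketch, with a made-up example graph and heuristic tables (all names are assumptions for illustration):

def is_consistent(h, edges):
    """edges: iterable of (s, s_next, cost) triples; h: dict mapping states to
    heuristic values. True iff h(s) <= cost + h(s_next) on every edge."""
    return all(h[s] <= cost + h[s2] for s, s2, cost in edges)

# Tiny made-up graph with edge costs and two heuristic tables.
edges = [("A", "B", 2), ("B", "G", 3), ("A", "G", 6)]
h_ok  = {"A": 4, "B": 3, "G": 0}     # consistent, hence admissible
h_bad = {"A": 6, "B": 3, "G": 0}     # violates h(A) <= 2 + h(B)
print(is_consistent(h_ok, edges))    # True
print(is_consistent(h_bad, edges))   # False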

A* Search
If the branching factor b of a problem is finite, the costs of all operators are positive, and h(s) is consistent, then A* search is guaranteed to find an optimal solution.

A* Search
Consider any other search method that is given the same information as A* (initial state, goal, operators, and h) and is guaranteed to return an optimal solution. A* is optimally efficient: no such search method is guaranteed to expand fewer nodes than A* (ignoring the effects of randomly breaking ties between states with the same f(s) value).

A* Search
A* was introduced in 1968, has been studied extensively, and there are many other results. For example: let Δ = h*(initial state) − h(initial state). If the search space is a tree with reversible actions, the runtime of A* is O(b^Δ).

Other Search Methods

Other Search Methods Uninformed Search

Bidirectional Search [figure: one search frontier grows forward from the Initial State while a second grows backward from the Goal State]

Bidirectional Search
Perform a breadth-first search from both ends.
Interleave the searches: expand 1 node of the tree coming from the initial state, then 1 node of the tree coming from the goal state.
At each node, instead of testing goal(s), test whether s matches any node on the other search's open or closed lists.
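
A minimal sketch of this interleaved bidirectional search, assuming unit step costs, a successors function, and a predecessors function for searching backward from the goal; all names here are illustrative assumptions rather than code from the lecture.

from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Interleaved BFS from both ends. Returns the length of a path found when
    the two searches meet, or None if there is no path. Each newly generated
    state is checked against the states already seen by the other search."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}
    front_f, front_b = deque([start]), deque([goal])

    def expand_one(frontier, dist, neighbors, other_dist):
        s = frontier.popleft()
        for s2 in neighbors(s):
            if s2 not in dist:
                dist[s2] = dist[s] + 1
                if s2 in other_dist:                  # the frontiers have met
                    return dist[s2] + other_dist[s2]
                frontier.append(s2)
        return None

    while front_f and front_b:
        # Expand one node of the tree growing from the initial state...
        met = expand_one(front_f, dist_f, successors, dist_b)
        if met is not None:
            return met
        # ...then one node of the tree growing from the goal state.
        met = expand_one(front_b, dist_b, predecessors, dist_f)
        if met is not None:
            return met
    return None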

Analysis: Bidirectional Search
Criterion    DFS       BFS       Bidirectional Search
Complete?    No        Yes       Yes
Optimal?     No        Yes       Yes
Time?                  O(b^d)    O(b^(d/2))
Space?       O(bd)     O(b^d)    O(b^(d/2))

Other Search Methods Informed Search

Branch-and-Bound Search
Use with depth-first search on problems with f(s) = g(s) that have multiple solutions.
Keep track of the cheapest solution found thus far.
Don't expand states whose value exceeds this cheapest-thus-far value.
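
A minimal depth-first branch-and-bound sketch for the f(s) = g(s) case described above; the successors and is_goal interfaces are assumptions for illustration.

def branch_and_bound(start, is_goal, successors):
    """Depth-first search that records the cheapest solution found so far and
    prunes any state whose path cost already matches or exceeds it (f(s) = g(s))."""
    best = {"cost": float("inf"), "path": None}

    def dfs(s, g, path):
        if g >= best["cost"]:            # bound: cannot improve on cheapest-so-far
            return
        if is_goal(s):
            best["cost"], best["path"] = g, path
            return
        for s2, cost in successors(s):
            if s2 not in path:           # simple cycle check along the current path
                dfs(s2, g + cost, path + [s2])

    dfs(start, 0.0, [start])
    return best["path"], best["cost"]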

Iterative Deepening Version of Best-First Search
Modifies best-first search to have a cost bound: do not expand a state whose f value is greater than the given bound.
Keep track of the lowest f value among all the states that exceeded the given bound; use that value as the cost bound for the next iteration of the cost-bounded search.
When f(s) = g(s) this is called Iterative Lengthening Search (if the costs of all operators are 1, it is Iterative Deepening Search).
When f(s) = g(s) + h(s) this is called Iterative Deepening A* (IDA*).
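
A minimal IDA* sketch following this description: run cost-bounded depth-first search with f(s) = g(s) + h(s), then raise the bound to the lowest f value that exceeded it. The successors, h, and is_goal interfaces are assumed for illustration.

def ida_star(start, is_goal, successors, h):
    """Iterative Deepening A*: repeat cost-bounded DFS with f(s) = g(s) + h(s),
    raising the bound to the lowest f value that exceeded it last iteration."""
    bound = h(start)
    path = [start]

    def search(s, g, bound):
        f = g + h(s)
        if f > bound:
            return f                      # smallest f over the bound on this branch
        if is_goal(s):
            return "FOUND"
        lowest_excess = float("inf")
        for s2, cost in successors(s):
            if s2 not in path:            # avoid cycles along the current path
                path.append(s2)
                t = search(s2, g + cost, bound)
                if t == "FOUND":
                    return "FOUND"
                lowest_excess = min(lowest_excess, t)
                path.pop()
        return lowest_excess

    while True:
        t = search(start, 0.0, bound)
        if t == "FOUND":
            return list(path)             # path from the start state to a goal
        if t == float("inf"):
            return None                   # no solution
        bound = t                         # next iteration's cost bound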

Simplified Memory-bounded A* (SMA*)
Run A* as usual until memory is exhausted (no room to add a new state to the open list).
When memory is exhausted, delete from open the most expensive pending state, but save its f value with its parent state.
When other parts of the space have been expanded, the parent state can be re-expanded to find the state with that value.

Hill-climbing Search
Also known as greedy local search. ("Hill climbing": originally developed for problems where larger f is better.)

hillclimbing(s, ops):
    If goal(s) then return s
    Else succs ← { s' : s' = apply(s, o) where o ∈ ops }
         best ← argmin over s' ∈ succs of f(s')
         hillclimbing(best, ops)

Hill-climbing Search: 8-queens puzzle
Initial state: 8 queens on the board, 1 per column, all in row 1
Operators: move queen i to a different row in its column
f(s) = # of pairs of queens that are attacking each other
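
A small Python sketch of hill climbing on this 8-queens formulation. Unlike the recursive pseudocode on the previous slide, it also stops when no successor improves f, which is exactly the local-optimum situation discussed on the next slide; rows are 0-indexed here, and the encoding is an assumption for illustration.

def attacking_pairs(state):
    """f(s): number of pairs of queens attacking each other.
    state[c] is the row of the queen in column c."""
    n, pairs = len(state), 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diag = abs(state[c1] - state[c2]) == c2 - c1
            pairs += same_row or same_diag
    return pairs

def hill_climb(n=8):
    """Start with every queen in row 0 (one per column) and repeatedly move to
    the successor with the fewest attacking pairs, stopping at a goal or a local optimum."""
    state = [0] * n                       # initial state: one queen per column, all in the first row
    while True:
        if attacking_pairs(state) == 0:
            return state                  # goal: no queens attack each other
        successors = [state[:c] + [r] + state[c + 1:]
                      for c in range(n) for r in range(n) if r != state[c]]
        best = min(successors, key=attacking_pairs)
        if attacking_pairs(best) >= attacking_pairs(state):
            return state                  # local optimum: no successor is better
        state = best

result = hill_climb()
print(result, attacking_pairs(result))    # often > 0 attacks: stuck at a local optimum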

Hill-climbing Search
Not complete. Main problem: local optima. All of the states you can reach from the given state are worse than it, so you're at a peak, but there are better states elsewhere in the search space.
Common solution approaches:
Random restart: multiple possible initial states (example: 8-queens)
Select successors probabilistically based on f value

Simulated Annealing
Similar to hill climbing: keeps a single state.

SA(s, ops):
    If goal(s) then return s
    Else pick a random o in ops; s' ← apply(o, s)
         If f(s') < f(s) then SA(s', ops)
         Else with some small probability keep s' anyway (call SA(s', ops)); otherwise pick another o
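
A minimal simulated-annealing sketch in the spirit of the pseudocode above, applied to the same 8-queens objective; the geometric cooling schedule and the exp(-Δ/T) acceptance probability are standard choices assumed here, not taken from the slides.

import math
import random

def attacking_pairs(state):
    """f(s) for n-queens: number of pairs of queens attacking each other."""
    return sum(state[i] == state[j] or abs(state[i] - state[j]) == j - i
               for i in range(len(state)) for j in range(i + 1, len(state)))

def simulated_annealing(initial, f, random_successor,
                        t0=1.0, cooling=0.999, t_min=1e-3):
    """Keep a single state; always accept improving moves, and accept worsening
    moves with a probability that shrinks as the temperature T decreases."""
    s, t = initial, t0
    while t > t_min:
        if f(s) == 0:                       # goal: nothing left to improve
            return s
        s2 = random_successor(s)
        delta = f(s2) - f(s)
        if delta < 0 or random.random() < math.exp(-delta / t):
            s = s2                          # accept: always if better, sometimes if worse
        t *= cooling                        # geometric cooling schedule (assumed)
    return s

def random_queen_move(state):
    """Pick a random column and move its queen to a random row."""
    c = random.randrange(len(state))
    return state[:c] + [random.randrange(len(state))] + state[c + 1:]

result = simulated_annealing([0] * 8, attacking_pairs, random_queen_move)
print(result, attacking_pairs(result))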