
CS 540: Introduction to Artificial Intelligence
Midterm Exam: 7:15-9:15 pm, October 2014, Room 140 CS Building
CLOSED BOOK (one sheet of notes and a calculator allowed)

Write your answers on these pages and show your work. If you feel that a question is not fully specified, state any assumptions you need to make in order to solve the problem. You may use the backs of these sheets for scratch work.

Write your name on this page and initial all other pages of this exam (in case a page comes loose during grading). Make sure your exam contains seven problems on nine pages.

Name: ____________________
Student ID: ____________________

    Problem   Score   Max Score
       1                  20
       2                  15
       3                  12
       4                  14
       5                  20
       6                   9
       7                  10
     TOTAL               100

Problem 1: Decision Trees (20 points)

Assume that you are given the set of labeled training examples below, where each feature has three possible values: a, b, or c. You choose to learn a decision tree and select "-" as the default output if there are ties.

    F1   F2   Output
    b    b    +
    b    c    +
    c    a    +
    c    b    -
    b    a    -
    c    c    -

a) What score would the information-gain calculation assign to each of the features? Be sure to show all your work (use the back of this or the previous sheet if needed).

b) Which feature would be chosen as the root of the decision tree being built? (Break ties in favor of F1 over F2.)
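For parts (a) and (b), here is a minimal sketch of the information-gain computation as usually defined (the helper names entropy and info_gain are illustrative, not the course's notation); it simply applies Entropy(S) minus the weighted remainder to the six examples above:

    import math

    # The six training examples from the table above: (F1, F2, Output).
    examples = [
        ("b", "b", "+"), ("b", "c", "+"), ("c", "a", "+"),
        ("c", "b", "-"), ("b", "a", "-"), ("c", "c", "-"),
    ]

    def entropy(labels):
        """Shannon entropy (in bits) of a list of class labels."""
        total = len(labels)
        return -sum((labels.count(v) / total) * math.log2(labels.count(v) / total)
                    for v in set(labels))

    def info_gain(examples, feature):
        """Entropy of the full set minus the weighted entropy of each split."""
        labels = [ex[-1] for ex in examples]
        remainder = 0.0
        for value in set(ex[feature] for ex in examples):
            subset = [ex[-1] for ex in examples if ex[feature] == value]
            remainder += len(subset) / len(examples) * entropy(subset)
        return entropy(labels) - remainder

    for index, name in enumerate(["F1", "F2"]):
        print(name, round(info_gain(examples, index), 3))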

c) Consider the following learned tree and TUNE set. The numbers in the leaves represent the number of TRAIN examples reaching that leaf.

[Figure: a learned decision tree rooted at F1 (branches labeled x, y, and z) with internal F2 nodes and leaf TRAIN counts including 4+, 7-, 9+, 3+, 5-, and 8-, shown alongside a TUNE-set table of F1/F2/Output examples. The diagram did not survive text extraction.]

Using the decision-tree pruning algorithm presented in class, draw and score all the possible pruned versions of the above learned tree, and circle the best tree found after the first cycle of the pruning algorithm.

d) Circle the search strategy used when creating a decision tree that fits the TRAIN set.
   i) A*    ii) breadth-first search    iii) depth-first search    iv) hill climbing

e) Circle the search strategy used when pruning the learned tree using the TUNE set.
   i) A*    ii) breadth-first search    iii) depth-first search    iv) hill climbing

Problem 2: Search (15 points)

Consider the search space below, where S is the start node and G1 and G2 satisfy the goal test. Arcs are labeled with the cost of traversing them, and the estimated cost to a goal is reported inside each node. For each of the following search strategies, indicate which goal state is reached (if any) and list, in order, all the states popped off of the OPEN list. When all else is equal, nodes should be removed from OPEN in alphabetical order. You can show your work (for cases of partial credit), using the notation presented in class, on the back of the previous page.

Best-first search (using f = h)
   Goal state reached: ______    States popped off OPEN: ______

Iterative deepening
   Goal state reached: ______    States popped off OPEN: ______

A* (f = g + h)
   Goal state reached: ______    States popped off OPEN: ______

[Figure: the search graph, with start node S, intermediate nodes A through F, goal nodes G1 and G2 (both with heuristic value 0), arc costs on the edges, and heuristic estimates inside the nodes. The diagram did not survive text extraction.]
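As a point of reference for how these strategies differ, here is a minimal sketch of the OPEN-list discipline they share (the function and parameter names are illustrative): greedy best-first pops the node with the lowest h, A* pops the lowest g + h, and both terminate only when a goal is removed from OPEN.

    import heapq

    def best_first_search(start, successors, h, is_goal, use_g=True):
        """Generic best-first search. f = g + h when use_g is True (A*);
        f = h when use_g is False (greedy best-first). Ties break
        alphabetically because the node name is part of the heap key."""
        open_list = [(h(start), start, 0)]   # entries are (f, node, g)
        closed = set()
        popped = []                          # states in the order popped
        while open_list:
            f, node, g = heapq.heappop(open_list)
            if node in closed:
                continue
            closed.add(node)
            popped.append(node)
            if is_goal(node):                # goal test on removal from OPEN
                return node, popped
            for child, step_cost in successors(node):
                child_g = g + step_cost
                child_f = child_g + h(child) if use_g else h(child)
                heapq.heappush(open_list, (child_f, child, child_g))
        return None, popped

Iterative deepening does not fit this mold; it instead runs a series of depth-limited depth-first searches with increasing depth limits.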

Problem 3: Expected-Value Calculations (12 points)

a) Imagine that we are playing a game involving a pair of fair, six-sided dice. It costs $1 to play the game, and one shakes then throws the dice once. The payoff is as follows: if the values of the two dice are different, you win nothing. If the values of the two dice are the same, you win in dollars the number on the dice. (E.g., if you shake a pair of 6's, you win $6.) Assume you only have $1 in your pocket and you play this game once. After playing it once, what is the expected amount of money you will have? Show your work (you should lump together the calculations for the 30 dice rolls where you win nothing).

b) Which of these functions involve an (explicit) expected-value calculation? It is fine to circle none or more than one.
   i) crossover
   ii) information needed (called the B function in the textbook)
   iii) simulated annealing
   iv) remainder (i.e., information still remaining)
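One way to set up the part (a) computation is to enumerate the 36 equally likely rolls; the sketch below does exactly that (using exact fractions is a stylistic choice, not something the exam requires):

    from fractions import Fraction

    # Start with $1, pay $1 to play, then add any winnings.
    expected = Fraction(0)
    for die1 in range(1, 7):
        for die2 in range(1, 7):
            winnings = die1 if die1 == die2 else 0  # doubles pay face value
            money_after = 1 - 1 + winnings          # had $1, paid $1 to play
            expected += Fraction(money_after, 36)   # each roll has probability 1/36

    print(expected, "=", float(expected))

The 30 losing rolls each contribute $0, which is the lumping the question mentions.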

Problem 4: Game Playing (14 points)

a) Apply the minimax algorithm to the partial game tree below, where it is the maximizer's turn to play and the game does not involve randomness. The values estimated by the static board evaluator (SBE) are indicated in the leaf nodes (higher scores are better for the maximizer). Write the estimated values of the intermediate nodes inside their circles and indicate the proper move of the maximizer by circling one of the root's outgoing arcs.

[Figure: a partial game tree whose leaf SBE values are 9, 4, -5, 8, 1, 3, -9, -4, and -2 (the last value is garbled in the original). The tree structure did not survive text extraction.]

b) List one leaf node (if any) in the above game tree whose SBE score need not be computed: ______
   Explain why: ______

c) Circle the one answer below that you feel is the most correct.
   i) Minimax and alpha-beta pruning always return the same choice of move; the advantage of alpha-beta pruning is that it usually uses many fewer CPU cycles.
   ii) The horizon effect occurs because we can only search a finite number of moves ahead and might not reach board configurations where the game ends.
   iii) Using the minimax algorithm to choose moves means our game-playing algorithm cannot take advantage of poor moves by the opponent.
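Part (b) is asking which leaf alpha-beta pruning would skip. For reference, a minimal minimax-with-alpha-beta sketch (encoding the tree as nested lists of numbers is an assumption for illustration, not the exam's notation):

    def minimax_ab(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        """Depth-first minimax with alpha-beta pruning. A leaf is a number
        (its SBE score); an internal node is a list of child nodes."""
        if isinstance(node, (int, float)):
            return node                      # leaf: return its SBE score
        if maximizing:
            best = float("-inf")
            for child in node:
                best = max(best, minimax_ab(child, False, alpha, beta))
                alpha = max(alpha, best)
                if alpha >= beta:            # remaining children are pruned,
                    break                    # so their SBEs go uncomputed
            return best
        best = float("inf")
        for child in node:
            best = min(best, minimax_ab(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

    # Example with a hypothetical two-level tree:
    # print(minimax_ab([[9, 4], [-5, 8]], maximizing=True))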

Problem 5: Multiple-Choice Questions (20 points)

For each question, circle your answer(s). Unless stated otherwise, choose the one best answer.

a) Even if we use a beam width of 1 and the same tie-breaking procedure, beam search and hill climbing can return different answers. TRUE / FALSE

b) If we have an admissible heuristic function, then when a node is expanded and placed in CLOSED, we have always reached it via the shortest possible path. TRUE / FALSE

c) Which of these is not an ensemble method for supervised learning:
   i) bagging    ii) boosting    iii) broadening

d) Chi-squared (χ²) pruning is a method for (OK to circle none or more than one):
   i) game playing    ii) overfitting reduction    iii) reducing the size of OPEN

e) If all arcs in a search task cost $7, then uniform-cost search and breadth-first search will definitely produce the same solution: TRUE / FALSE

f) Which of the following is/are of central importance for genetic algorithms (OK to circle none or more than one):
   i) admissibility    ii) crossover    iii) fitness-proportional reproduction

g) For small datasets, it is acceptable to skip using a TUNING set and instead use the TESTING set to select good parameter settings. TRUE / FALSE

h) If best-first search finds some goal node in finite time, then depth-first search will also find a goal node in finite time. TRUE / FALSE

i) If depth-first search finds some goal node in finite time, then beam search will also find a goal node in finite time. TRUE / FALSE

j) Which of the following is most similar to simulated annealing:
   i) crossover    ii) decision-tree pruning    iii) iterative deepening

Problem 6: Key AI Concepts (9 points)

Briefly describe each of the following AI concepts and explain each one's significance. (Write your answers below the phrases and clearly separate your description and significance.)

Test sets
   Description:
   Significance:

Fixed-length feature vectors
   Description:
   Significance:

Random restarts
   Description:
   Significance:
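As one concrete illustration of the last concept, here is a minimal hill-climbing-with-random-restarts sketch (random_state, neighbors, and score are hypothetical placeholders for a problem-specific state generator, neighborhood function, and heuristic):

    def hill_climb(start, neighbors, score):
        """Greedy hill climbing: move to the best neighbor until stuck."""
        current = start
        while True:
            best = max(neighbors(current), key=score, default=None)
            if best is None or score(best) <= score(current):
                return current               # a local optimum; we are stuck
            current = best

    def with_random_restarts(random_state, neighbors, score, restarts=10):
        """Run hill climbing from several random starting states and keep
        the best result; a cheap hedge against bad local optima."""
        runs = (hill_climb(random_state(), neighbors, score)
                for _ in range(restarts))
        return max(runs, key=score)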

Problem 7: Miscellaneous Questions (10 points)

a) The search algorithm A* does not terminate until a goal node is removed from the OPEN list. That goal state may have been in the OPEN list for millions of node expansions, which seems inefficient. Why not terminate A* as soon as a goal node is added to OPEN? Justify your answer with a simple, concrete example that shows doing so would violate an important property of A*. Be sure to explain your example, including which important property of A* is violated. (Don't use more than four nodes in your answer.)

b) Imagine that the simulated-annealing algorithm is at node A and has randomly chosen node B as the candidate next state. Assume the temperature equals 7, the heuristic score of A is 5, and that of B is 3 (higher heuristic scores are better). What is the probability that node B will be accepted as the next state?

   Answer: ______
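For part (b), the standard simulated-annealing acceptance rule takes a worse move with probability e^(ΔE/T), where ΔE is the change in score; assuming the course uses this usual formulation, a sketch:

    import math
    import random

    def accept_move(current_score, candidate_score, temperature):
        """Standard simulated-annealing acceptance: always take improvements;
        accept a worse move with probability e^(delta / T), delta < 0."""
        delta = candidate_score - current_score
        if delta >= 0:
            return True
        return random.random() < math.exp(delta / temperature)

    # With the exam's numbers: delta = 3 - 5 = -2 and T = 7,
    # so the acceptance probability is e^(-2/7), roughly 0.75.
    print(math.exp(-2 / 7))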