Final Exam. EECS 2011 Prof. J. Elder


Final Exam
- Wed Apr 11, 2pm-5pm, Aviva Tennis Centre
- Closed book
- Format similar to the midterm
- Will cover the whole course, with emphasis on material after the midterm (maps and hash tables, binary search, loop invariants, binary search trees, sorting, graphs)

Suggested Study Strategy
- Review and understand the slides.
- Do all of the practice problems provided.
- Review all assignments, especially assignments 3 and 4.
- Read the textbook, especially where concepts and methods are not yet clear to you.
- Do extra practice problems from the textbook.
- Review the midterm and solutions for practice writing this kind of exam.
- Practice writing clear, succinct pseudocode!

End of Term Review

Summary of Topics
1. Maps & Hash Tables
2. Binary Search & Loop Invariants
3. Binary Search Trees
4. Sorting
5. Graphs

Topic 1: Maps & Hash Tables

Maps
- A map models a searchable collection of key-value entries.
- The main operations of a map are searching, inserting, and deleting items.
- Multiple entries with the same key are not allowed.
- Applications:
  - address book
  - student-record database

Performance of a List-Based Map
- put, get and remove take O(n) time, since in the worst case (the item is not found) we traverse the entire sequence looking for an item with the given key.
- The unsorted list implementation is effective only for small maps.

Hash Tables
- A hash table is a data structure that can be used to make map operations faster.
- While the worst case is still O(n), the average case is typically O(1).

Compression Functions
- Division:
  - h2(y) = y mod N
  - The size N of the hash table is usually chosen to be a prime (on the assumption that the differences between hash keys y are less likely to be multiples of primes).
- Multiply, Add and Divide (MAD):
  - h2(y) = [(ay + b) mod p] mod N, where
    - p is a prime number greater than N
    - a and b are integers chosen at random from the interval [0, p - 1], with a > 0.
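A minimal Python sketch of the MAD compression function described above. The table size N = 13 and the prime p below are illustrative choices, not values from the slides:

```python
import random

def make_mad_compression(N, p):
    """Build a MAD (Multiply, Add and Divide) compression function
    h2(y) = [(a*y + b) mod p] mod N, where p is a prime > N and
    a, b are drawn at random from [0, p-1] with a > 0."""
    a = random.randrange(1, p)   # a > 0
    b = random.randrange(0, p)
    return lambda y: ((a * y + b) % p) % N

h2 = make_mad_compression(N=13, p=109345121)
# every hash key is compressed into a valid table index 0..N-1
assert all(0 <= h2(y) < 13 for y in range(1000))
```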

Collision Handling
- Collisions occur when different elements are mapped to the same cell.
- Separate Chaining:
  - Let each cell in the table point to a linked list of the entries that map there.
  - Separate chaining is simple, but requires additional memory outside the table.
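A small sketch of separate chaining: each cell holds a list of the (key, value) entries that map to it. The class name and the use of Python's built-in hash are illustrative, not from the slides:

```python
class ChainedHashTable:
    """Minimal map with separate chaining: each table cell points to a
    list of the (key, value) entries that hash to that cell."""
    def __init__(self, N=13):
        self.table = [[] for _ in range(N)]
        self.N = N

    def _bucket(self, k):
        return self.table[hash(k) % self.N]

    def put(self, k, v):
        bucket = self._bucket(k)
        for i, (key, _) in enumerate(bucket):
            if key == k:            # replace: duplicate keys are not allowed
                bucket[i] = (k, v)
                return
        bucket.append((k, v))

    def get(self, k):
        for key, v in self._bucket(k):
            if key == k:
                return v
        return None

m = ChainedHashTable()
m.put(18, "a"); m.put(31, "b")      # 18 and 31 collide: both map to cell 5 mod 13
assert m.get(18) == "a" and m.get(31) == "b"
assert m.get(99) is None
```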

Open Addressing: Linear Probing
- Open addressing: the colliding item is placed in a different cell of the table.
- Linear probing handles collisions by placing the colliding item in the next (circularly) available table cell.
- Each table cell inspected is referred to as a probe.
- Colliding items lump together, so that future collisions cause a longer sequence of probes.
- Example:
  - h(x) = x mod 13
  - Insert keys 18, 41, 22, 44, 59, 32, 31, 73, in this order.
  - Resulting table (indices 0-12): 41 at 2, 18 at 5, 44 at 6, 59 at 7, 32 at 8, 22 at 9, 31 at 10, 73 at 11.
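The example above can be reproduced with a short sketch of linear probing (insertion only; search and removal are omitted):

```python
def linear_probe_insert(keys, N=13):
    """Insert keys into a size-N open-addressing table with h(x) = x mod N,
    resolving collisions by linear probing: scan circularly for the next
    free cell."""
    table = [None] * N
    for x in keys:
        i = x % N
        while table[i] is not None:   # each inspected cell is one probe
            i = (i + 1) % N
        table[i] = x
    return table

table = linear_probe_insert([18, 41, 22, 44, 59, 32, 31, 73])
# matches the slide: 41@2, 18@5, 44@6, 59@7, 32@8, 22@9, 31@10, 73@11
assert table == [None, None, 41, None, None, 18, 44, 59, 32, 22, 31, 73, None]
```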

Open Addressing: Double Hashing
- Double hashing is an alternative open addressing method that uses a secondary hash function h'(k) in addition to the primary hash function h(k).
- Suppose that the primary hashing i = h(k) leads to a collision.
- We then iteratively probe the locations (i + j*h'(k)) mod N, for j = 0, 1, ..., N - 1.
- The secondary hash function h'(k) cannot have zero values.
- N is typically chosen to be prime.
- Common choice of secondary hash function:
  - h'(k) = q - (k mod q), where
    - q < N
    - q is a prime
  - The possible values for h'(k) are then 1, 2, ..., q.
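A sketch of double hashing with the common secondary function above. The values N = 13, q = 7 and the inserted keys are illustrative:

```python
def double_hash_insert(keys, N=13, q=7):
    """Insert keys into a size-N table using double hashing: probe the
    locations (h(k) + j*h2(k)) mod N for j = 0, 1, ..., N-1, where
    h(k) = k mod N and the secondary hash is h2(k) = q - (k mod q),
    which is never zero."""
    table = [None] * N
    for k in keys:
        h, h2 = k % N, q - (k % q)
        for j in range(N):
            i = (h + j * h2) % N
            if table[i] is None:
                table[i] = k
                break
    return table

table = double_hash_insert([18, 41, 22, 44])
assert all(k in table for k in [18, 41, 22, 44])
assert table.count(None) == 13 - 4        # no key was dropped
```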

Topic 2: Binary Search & Loop Invariants

Ordered Maps and Dictionaries
- If the keys obey a total order relation, we can represent a map or dictionary as an ordered search table stored in an array.
- We can then support a fast find(k) using binary search:
  - at each step, the number of candidate items is halved
  - the search terminates after a logarithmic number of steps
  - Example: find(7) in the table 1 3 4 5 7 8 9 11 14 16 18 19 repeatedly narrows the candidate range [l, h] around the midpoint m until l = m = h.

Loop Invariants
- Binary search can be implemented as an iterative algorithm (it could also be done recursively).
- Loop Invariant: an assertion about the current state, useful for designing, analyzing and proving the correctness of iterative algorithms.

Establishing the Loop Invariant
From the pre-conditions on the input instance we must establish the loop invariant.

Maintain the Loop Invariant
By induction, the computation will always be in a safe location:
if S(0) holds, and S(i) implies S(i+1) for all i, then S(i) holds for all i.

Ending the Algorithm
- Define the exit condition.
- Termination: with sufficient progress, the exit condition will be met.
- When we exit, we know that
  - the exit condition is true
  - the loop invariant is true
  and from these we must establish the post-conditions.
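The framework above can be illustrated with an iterative binary search whose loop invariant is checked explicitly (the asserts are for exposition; a production version would omit them):

```python
def binary_search(A, k):
    """Iterative binary search over a sorted array A, maintaining the
    loop invariant: if k occurs in A, then k lies in A[lo..hi]."""
    lo, hi = 0, len(A) - 1
    while lo < hi:
        # invariant: everything left of lo is < k, everything right of hi is > k
        assert lo == 0 or A[lo - 1] < k
        assert hi == len(A) - 1 or A[hi + 1] > k
        mid = (lo + hi) // 2
        if A[mid] < k:
            lo = mid + 1      # k cannot be in A[lo..mid]
        else:
            hi = mid          # k cannot be in A[mid+1..hi]
    # exit condition (lo == hi) + invariant => A[lo] is the only candidate
    return lo if A and A[lo] == k else -1

table = [1, 3, 4, 5, 7, 8, 9, 11, 14, 16, 18, 19]
assert binary_search(table, 7) == 4     # the slides' find(7) example
assert binary_search(table, 6) == -1
```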

Topic 3: Binary Search Trees

Binary Search Trees
- Insertion
- Deletion
- AVL Trees
- Splay Trees

Binary Search Tree
- BST property: for any node, all keys in its left subtree are at most the node's key, and all keys in its right subtree are at least the node's key.
- [Figure: example BST with root 38 and children 25 and 51.]

Search: Define Step
- Cut the subtree in half.
- Determine which half the key would be in.
- Keep that half.
- If key < root, the key is in the left half; if key = root, the key is found; if key > root, the key is in the right half.

Insertion (For Dictionary)
- To perform operation insert(k, o), we search for key k (using TreeSearch).
- Suppose k is not already in the tree, and let w be the leaf reached by the search.
- We insert k at node w and expand w into an internal node.
- Example: insert 5.

Insertion
- Suppose k is already in the tree, at node v.
- We continue the downward search through v, and let w be the leaf reached by the search.
- Note that it would be correct to go either left or right at v; we go left by convention.
- We insert k at node w and expand w into an internal node.
- Example: insert 6.
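A plain node-based sketch of BST insertion (not the slides' leaf-expansion representation), with duplicates sent left per the convention above:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, k):
    """Insert key k into a BST; duplicate keys go into the left
    subtree, matching the slides' convention."""
    if root is None:
        return Node(k)
    if k <= root.key:
        root.left = bst_insert(root.left, k)
    else:
        root.right = bst_insert(root.right, k)
    return root

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

root = None
for k in [6, 2, 9, 1, 4, 8]:
    root = bst_insert(root, k)
root = bst_insert(root, 6)          # duplicate: placed in the left subtree
assert inorder(root) == [1, 2, 4, 6, 6, 8, 9]
```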

Deletion
- To perform operation remove(k), we search for key k.
- Suppose key k is in the tree, and let v be the node storing k.
- If node v has a leaf child w, we remove v and w from the tree with operation removeExternal(w), which removes w and its parent.
- Example: remove 4.

Deletion (cont.)
- Now consider the case where the key k to be removed is stored at a node v whose children are both internal:
  - we find the internal node w that follows v in an inorder traversal
  - we copy the entry stored at w into node v
  - we remove node w and its left child z (which must be a leaf) by means of operation removeExternal(z)
- Example: remove 3.
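A node-based sketch of the two-children case: the key that follows v in an inorder traversal (the leftmost key of the right subtree) is copied in, and that node is deleted instead. This echoes the slides' strategy without their leaf-expansion representation:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def bst_delete(root, k):
    """Delete key k from a BST. In the two-children case, copy in the
    inorder successor's key, then delete the successor node."""
    if root is None:
        return None
    if k < root.key:
        root.left = bst_delete(root.left, k)
    elif k > root.key:
        root.right = bst_delete(root.right, k)
    elif root.left is None:          # zero or one child: splice out
        return root.right
    elif root.right is None:
        return root.left
    else:                            # two children
        w = root.right
        while w.left is not None:    # w = inorder successor of root
            w = w.left
        root.key = w.key
        root.right = bst_delete(root.right, w.key)
    return root

root = Node(3, Node(1, None, Node(2)),
               Node(8, Node(5, None, Node(6)), Node(9)))
root = bst_delete(root, 3)           # two-children case: successor 5 moves up
assert root.key == 5
assert inorder(root) == [1, 2, 5, 6, 8, 9]
```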

Performance
- Consider a dictionary with n items implemented by means of a binary search tree of height h:
  - the space used is O(n)
  - methods find, insert and remove take O(h) time
- The height h is O(n) in the worst case and O(log n) in the best case.
- It is thus worthwhile to balance the tree (next topic)!

AVL Trees
- AVL trees are balanced.
- An AVL tree is a binary search tree in which the heights of sibling subtrees can differ by at most 1.
- [Figure: AVL tree with the height of each node shown.]
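The AVL property can be checked with a short recursive function. Trees are written as (key, left, right) tuples here for brevity, and the height of an empty subtree is taken to be -1 (a convention, not from the slides):

```python
def is_avl(t):
    """t is (key, left, right) or None. Returns (is_balanced, height):
    balanced iff at every node the child subtree heights differ by <= 1."""
    if t is None:
        return True, -1
    _, left, right = t
    lok, lh = is_avl(left)
    rok, rh = is_avl(right)
    return lok and rok and abs(lh - rh) <= 1, 1 + max(lh, rh)

# the slides' example shape: 44 / 17, 78 / 32, 50, 88 / 48, 62
balanced = (44, (17, None, (32, None, None)),
                (78, (50, (48, None, None), (62, None, None)),
                     (88, None, None)))
# a left-left chain under 17 violates the height condition there
unbalanced = (44, (17, None, (32, None, (35, None, None))),
                  (78, None, None))
assert is_avl(balanced)[0] is True
assert is_avl(unbalanced)[0] is False
```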

Insertion
- Imbalance may occur at any ancestor of the inserted node.
- Example: Insert(2) increases the tree height from 3 to 4 and creates an imbalance at an ancestor of the new node.

Insertion: Rebalancing Strategy
- Step 1: Search
  - Starting at the inserted node, traverse toward the root until an imbalance is discovered.

Insertion: Rebalancing Strategy
- Step 2: Repair
  - The repair strategy is called trinode restructuring.
  - 3 nodes x, y and z are distinguished:
    - z = the parent of the high sibling
    - y = the high sibling
    - x = the high child of the high sibling
  - We can now think of the subtree rooted at z as consisting of these 3 nodes plus their 4 subtrees.

Insertion: Trinode Restructuring Example
- Note that y is the middle value.
- [Figure: a subtree of height h rooted at z is restructured so that y becomes its root, with x and z as children; the four subtrees T0..T3 are reattached in order, and the subtree height returns to h-1.]

Removal
- Imbalance may occur at an ancestor of the removed node.
- Example: Remove(8) creates an imbalance at an ancestor of the removed node.

Removal: Rebalancing Strategy
- Step 1: Search
  - Starting at the location of the removed node, traverse toward the root until an imbalance is discovered.

Removal: Rebalancing Strategy
- Step 2: Repair
  - We again use trinode restructuring.
  - 3 nodes x, y and z are distinguished:
    - z = the parent of the high sibling
    - y = the high sibling
    - x = the high child of the high sibling (if the children are equally high, keep the chain linear)

Removal: Trinode Restructuring - Case 1
- Note that y is the middle value.
- [Figure: a subtree of height h rooted at z is restructured so that y becomes its root, with x and z as children; depending on the heights of the reattached subtrees, the new subtree height is h or h-1.]

Removal: Rebalancing Strategy
- Step 2: Repair (cont.)
  - Unfortunately, trinode restructuring may reduce the height of the subtree, causing another imbalance further up the tree.
  - Thus this search-and-repair process must be repeated until we reach the root.

Splay Trees
- Self-balancing BST
- Invented by Daniel Sleator and Bob Tarjan
- Allows quick access to recently accessed elements
- Bad: worst case O(n)
- Good: average (amortized) case O(log n)
- Often perform better than other BSTs in practice

Splaying
- Splaying is an operation performed on a node that iteratively moves the node to the root of the tree.
- In splay trees, each BST operation (find, insert, remove) is augmented with a splay operation.
- In this way, recently searched and inserted elements are near the top of the tree, for quick access.

Zig-Zig
- Performed when the node x forms a linear chain with its parent and grandparent (i.e., right-right or left-left).
- The restructuring lifts x two levels, making its parent y and grandparent z its descendants.

Zig-Zag
- Performed when the node x forms a non-linear chain with its parent and grandparent (i.e., right-left or left-right).
- The restructuring lifts x above both its parent y and grandparent z, which become its children.

Zig
- Performed when the node x has no grandparent (i.e., its parent is the root).
- A single rotation makes x the new root.
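The three cases above can be sketched with a recursive splay (a simplification of the slides' zig / zig-zig / zig-zag diagrams, not a full top-down or bottom-up implementation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    x, y.left = y.left, y.left.right
    x.right = y
    return x

def rotate_left(y):
    x, y.right = y.right, y.right.left
    x.left = y
    return x

def splay(root, k):
    """Bring the node with key k (or the last node on the search path)
    to the root, by the zig / zig-zig / zig-zag cases."""
    if root is None or root.key == k:
        return root
    if k < root.key:
        if root.left is None:
            return root
        if k < root.left.key:                        # zig-zig (left-left)
            root.left.left = splay(root.left.left, k)
            root = rotate_right(root)
        elif k > root.left.key:                      # zig-zag (left-right)
            root.left.right = splay(root.left.right, k)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return rotate_right(root) if root.left else root      # zig
    else:                                            # symmetric cases
        if root.right is None:
            return root
        if k > root.right.key:                       # zig-zig (right-right)
            root.right.right = splay(root.right.right, k)
            root = rotate_left(root)
        elif k < root.right.key:                     # zig-zag (right-left)
            root.right.left = splay(root.right.left, k)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return rotate_left(root) if root.right else root      # zig

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

root = Node(3, Node(2, Node(1)))       # left-left chain 3-2-1
root = splay(root, 1)                  # rotations bring 1 to the root
assert root.key == 1
assert inorder(root) == [1, 2, 3]      # BST order is preserved
```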

Topic 4: Sorting

Sorting Algorithms
- Comparison Sorting
  - Selection Sort
  - Bubble Sort
  - Insertion Sort
  - Merge Sort
  - Heap Sort
  - Quick Sort
- Linear Sorting
  - Counting Sort
  - Radix Sort
  - Bucket Sort

Comparison Sorts
- Comparison sort algorithms sort the input by successive comparisons of pairs of input elements (e.g., is 3 < 11?).
- Comparison sort algorithms are very general: they make no assumptions about the values of the input elements.

Sorting Algorithms and Memory
- Some algorithms sort by swapping elements within the input array. Such algorithms are said to sort in place, and require only O(1) additional memory.
- Other algorithms require allocation of an output array into which values are copied. These algorithms do not sort in place, and require O(n) additional memory.

Stable Sort
- A sorting algorithm is said to be stable if the ordering of identical keys in the input is preserved in the output.
- The stable sort property is important, for example, when entries with identical keys are already ordered by another criterion.
- (Remember that stored with each key is a record containing some useful information.)

Summary of Comparison Sorts

Algorithm | Best Case | Worst Case | Average Case | In Place | Stable | Comments
----------|-----------|------------|--------------|----------|--------|---------
Selection | n^2       | n^2        |              | Yes      | Yes    |
Bubble    | n         | n^2        |              | Yes      | Yes    |
Insertion | n         | n^2        |              | Yes      | Yes    | Good if often almost sorted
Merge     | n log n   | n log n    |              | No       | Yes    | Good for very large datasets that require swapping to disk
Heap      | n log n   | n log n    |              | Yes      | No     | Best if guaranteed n log n required
Quick     | n log n   | n^2        | n log n      | Yes      | No     | Usually fastest in practice

Comparison Sort: Decision Trees
- For a 3-element array, there are 3! = 6 external nodes.
- For an n-element array, there are n! external nodes.

Comparison Sort
- To store n! external nodes, a decision tree must have a height of at least log(n!).
- Worst-case time is equal to the height of the binary decision tree.
- Thus T(n) ∈ Ω(log(n!)), where log(n!) = Σ_{i=1}^{n} log i ≥ Σ_{i=n/2}^{n} log(n/2) ≥ (n/2) log(n/2) ∈ Ω(n log n).
- Thus T(n) ∈ Ω(n log n), and MergeSort & HeapSort are asymptotically optimal.

Linear Sorts?
- Comparison sorts are very general, but are Ω(n log n).
- Faster sorting may be possible if we can constrain the nature of the input.

CountingSort
- Input: an array of records whose keys are small integers (e.g., digits).
- Auxiliary table, indexed by key value v: the location of the next record with digit v.
- Algorithm: go through the records in order, putting each where it goes.
- [Figure: input and output arrays of digit keys, with the location table, e.g. v=1 → 5, v=2 → 14, v=3 → 17.]
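A runnable sketch of the idea: count each key, turn the counts into starting locations by a prefix sum, then place each record, in input order, where it goes. The record format below is illustrative:

```python
def counting_sort(records, key, k):
    """Stable counting sort of records by key(r) in range(k)."""
    count = [0] * k
    for r in records:
        count[key(r)] += 1
    loc, start = [0] * k, 0
    for v in range(k):                 # loc[v] = location of next record with key v
        loc[v], start = start, start + count[v]
    out = [None] * len(records)
    for r in records:                  # stable: equal keys keep input order
        out[loc[key(r)]] = r
        loc[key(r)] += 1
    return out

data = [(3, 'a'), (1, 'b'), (3, 'c'), (2, 'd'), (1, 'e')]
assert counting_sort(data, key=lambda r: r[0], k=4) == \
       [(1, 'b'), (1, 'e'), (2, 'd'), (3, 'a'), (3, 'c')]
```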

RadixSort
- Invariant: after pass i, the records are sorted with respect to their i low-order digits.
- Pass i+1: stable-sort the records with respect to the next more significant digit.
- Example: 224 125 225 325 333 134 334 143 243 344 (sorted by the last two digits) becomes 125 134 143 224 225 243 325 333 334 344 after the pass on the high-order digit.
- The result is sorted with respect to the first i+1 digits: records with different high-order digits are in the correct order because of the sort on that digit, and records with equal high-order digits remain in order because the pass was stable and they were already sorted.
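A least-significant-digit radix sort sketch, using a bucket pass (stable) per digit rather than the counting-sort subroutine the slides imply:

```python
def radix_sort(nums, digits=3, base=10):
    """LSD radix sort: one stable pass per digit, least significant first.
    After pass i, the list is sorted with respect to its i low-order digits."""
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // base**d) % base].append(x)   # stable within a bucket
        nums = [x for b in buckets for x in b]
    return nums

nums = [224, 125, 225, 325, 333, 134, 334, 143, 243, 344]
assert radix_sort(nums) == sorted(nums)
```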

Bucket Sort
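The slide names bucket sort without details; this is the textbook formulation for keys uniformly distributed in [0, 1): scatter into buckets, sort each bucket, concatenate.

```python
def bucket_sort(xs, n_buckets=10):
    """Bucket sort sketch for keys in [0, 1): distribute each key to
    bucket floor(x * n_buckets), sort each bucket, then concatenate."""
    buckets = [[] for _ in range(n_buckets)]
    for x in xs:
        buckets[int(x * n_buckets)].append(x)
    return [x for b in buckets for x in sorted(b)]

xs = [0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]
assert bucket_sort(xs) == sorted(xs)
```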

Topic 5: Graphs

Graphs
- Definitions & Properties
- Implementations
- Depth-First Search
- Breadth-First Search

Properties
Notation: |V| = number of vertices, |E| = number of edges, deg(v) = degree of vertex v.

- Property 1: Σ_v deg(v) = 2|E|.
  Proof: each edge is counted twice.
- Property 2: in an undirected graph with no self-loops and no multiple edges, |E| ≤ |V|(|V| - 1)/2.
  Proof: each vertex has degree at most |V| - 1.
- Q: What is the bound for a digraph?  A: |E| ≤ |V|(|V| - 1).
- Example: |V| = 4, |E| = 6, deg(v) = 3 for every vertex.
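Property 1 can be checked directly on the slide's example, the complete graph on 4 vertices:

```python
from collections import defaultdict

def degrees(edges):
    """Degree of each vertex of an undirected graph given as an edge list."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1               # each edge contributes to two degrees
        deg[v] += 1
    return deg

# K4: |V| = 4, |E| = 6, every vertex has degree 3
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
deg = degrees(K4)
assert all(deg[v] == 3 for v in range(4))
assert sum(deg.values()) == 2 * len(K4)    # Property 1: Σ deg(v) = 2|E|
```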

DFS Algorithm Pattern

DFS(G)
  Precondition: G is a graph
  Postcondition: all vertices in G have been visited
  for each vertex u in V[G]
      color[u] = BLACK            // initialize vertex
  for each vertex u in V[G]
      if color[u] = BLACK         // as yet unexplored
          DFS-Visit(u)

Total work (excluding the DFS-Visit calls) = θ(|V|).

DFS Algorithm Pattern

DFS-Visit(u)
  Precondition: vertex u is undiscovered
  Postcondition: all vertices reachable from u have been processed
  color[u] = RED
  for each v in Adj[u]            // explore edge (u,v)
      if color[v] = BLACK
          DFS-Visit(v)
  color[u] = GRAY

Total work = Σ_v |Adj[v]| = θ(|E|).
Thus the running time is θ(|V| + |E|) (assuming an adjacency list structure).
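The pattern above translates almost line for line into Python (the finish-order return value is an addition for demonstration, not part of the slides' pseudocode):

```python
BLACK, RED, GRAY = "black", "red", "gray"   # undiscovered / discovered / finished

def dfs(adj):
    """DFS over all vertices of a graph given as an adjacency dict,
    following the slides' colouring scheme. Returns the finish order."""
    color = {u: BLACK for u in adj}
    order = []

    def visit(u):
        color[u] = RED
        for v in adj[u]:            # explore edge (u, v)
            if color[v] == BLACK:
                visit(v)
        color[u] = GRAY
        order.append(u)

    for u in adj:
        if color[u] == BLACK:       # as yet unexplored
            visit(u)
    return order

adj = {0: [1, 2], 1: [3], 2: [3], 3: [], 4: []}
order = dfs(adj)
assert set(order) == {0, 1, 2, 3, 4}
assert order.index(3) < order.index(1) < order.index(0)   # descendants finish first
```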

Other Variants of Depth-First Search
- The DFS pattern can also be used to:
  - Compute a forest of spanning trees (one for each call to DFS-Visit), encoded in a predecessor list π[u]
  - Label edges in the graph according to their role in the search (see textbook):
    - Tree edges, traversed to an undiscovered vertex
    - Forward edges, traversed to a descendant vertex on the current spanning tree
    - Back edges, traversed to an ancestor vertex on the current spanning tree
    - Cross edges, traversed to a vertex that has already been discovered, but is not an ancestor or a descendant

BFS Algorithm Pattern

BFS(G, s)
  Precondition: G is a graph, s is a vertex in G
  Postcondition: all vertices in G reachable from s have been visited
  for each vertex u in V[G]
      color[u] = BLACK            // initialize vertex
  color[s] = RED
  Q.enqueue(s)
  while Q is not empty
      u = Q.dequeue()
      for each v in Adj[u]        // explore edge (u,v)
          if color[v] = BLACK
              color[v] = RED
              Q.enqueue(v)
      color[u] = GRAY

Applications
- BFS traversal can be specialized to solve the following problems in O(|V| + |E|) time:
  - Compute the connected components of G
  - Compute a spanning forest of G
  - Find a simple cycle in G, or report that G is a forest
  - Given two vertices of G, find a path in G between them with the minimum number of edges, or report that no such path exists

Breadth-First Search
- Input: graph G = (V, E) (directed or undirected) and source vertex s ∈ V.
- Output:
  - d[v] = shortest-path distance δ(s, v) from s to v, for all v ∈ V
  - π[v] = u such that (u, v) is the last edge on a shortest path from s to v
- Idea: send out a search wave from s.
- Keep track of progress by colouring vertices:
  - Undiscovered vertices are coloured black.
  - Just-discovered vertices (on the wavefront) are coloured red.
  - Previously discovered vertices (behind the wavefront) are coloured grey.

Breadth-First Search Algorithm: Properties

BFS(G, s)
  Precondition: G is a graph, s is a vertex in G
  Postcondition: d[u] = shortest distance δ[u] and π[u] = predecessor of u on shortest paths from s, for each vertex u in G
  for each vertex u in V[G]
      d[u] = ∞
      π[u] = null
      color[u] = BLACK            // initialize vertex
  color[s] = RED
  d[s] = 0
  Q.enqueue(s)
  while Q is not empty
      u = Q.dequeue()
      for each v in Adj[u]        // explore edge (u,v)
          if color[v] = BLACK
              color[v] = RED
              d[v] = d[u] + 1
              π[v] = u
              Q.enqueue(v)
      color[u] = GRAY

Properties:
- Q is a FIFO queue.
- Each vertex is assigned a finite d value at most once.
- At any time, Q contains vertices with d values {i, ..., i, i+1, ..., i+1}.
- The d values assigned are monotonically increasing over time.
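A Python sketch of this BFS; an undiscovered ("black") vertex is marked here by d[v] = None rather than an explicit colour array:

```python
from collections import deque

def bfs(adj, s):
    """BFS from source s over a graph given as an adjacency dict.
    Returns (d, pi): d[v] is the shortest-path distance from s, and
    pi[v] the predecessor of v on a shortest path (None for s and for
    unreachable vertices)."""
    d = {u: None for u in adj}       # None marks an undiscovered vertex
    pi = {u: None for u in adj}
    d[s] = 0
    Q = deque([s])                   # FIFO queue
    while Q:
        u = Q.popleft()
        for v in adj[u]:             # explore edge (u, v)
            if d[v] is None:         # undiscovered
                d[v] = d[u] + 1
                pi[v] = u
                Q.append(v)
    return d, pi

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
d, pi = bfs(adj, 0)
assert d == {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}      # minimum edge counts from 0
assert pi[4] == 3 and pi[3] in (1, 2)           # last edge on a shortest path
```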
