Chapter 4. Divide-and-Conquer. Copyright 2007 Pearson Addison-Wesley. All rights reserved.


Divide-and-Conquer

The best-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances.
2. Solve the smaller instances recursively.
3. Obtain the solution to the original (larger) instance by combining these solutions.
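
The three steps can be seen in a tiny, runnable Python illustration (a toy example, summing a list, chosen here for brevity; it is not from the slides):

    def dc_sum(a):
        """Sum a list by divide-and-conquer."""
        if len(a) <= 1:                  # small instance: solve directly
            return a[0] if a else 0
        mid = len(a) // 2                # 1. divide into two smaller instances
        left = dc_sum(a[:mid])           # 2. solve the smaller instances recursively
        right = dc_sum(a[mid:])
        return left + right              # 3. combine the solutions

    print(dc_sum([8, 3, 2, 9, 7, 1, 5, 4]))   # 39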

Divide-and-Conquer Technique (cont.)

A problem of size n is divided into subproblem 1 of size n/2 and subproblem 2 of size n/2; a solution to subproblem 1 and a solution to subproblem 2 are then combined into a solution to the original problem.

Divide-and-Conquer Examples

- Sorting: mergesort and quicksort
- Binary tree traversals
- Binary search (?)
- Multiplication of large integers
- Matrix multiplication: Strassen's algorithm
- Closest-pair and convex-hull algorithms

General Divide-and-Conquer Recurrence

    T(n) = aT(n/b) + f(n),  where f(n) ∈ Θ(n^d), d ≥ 0

Master Theorem:
- If a < b^d, T(n) ∈ Θ(n^d)
- If a = b^d, T(n) ∈ Θ(n^d log n)
- If a > b^d, T(n) ∈ Θ(n^(log_b a))

Note: The same results hold with O instead of Θ.

Examples:
- T(n) = 4T(n/2) + n      T(n) ∈ ?
- T(n) = 4T(n/2) + n^2    T(n) ∈ ?
- T(n) = 4T(n/2) + n^3    T(n) ∈ ?
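
The slide leaves the three examples as exercises; working them out by comparing a with b^d:

- T(n) = 4T(n/2) + n:    a = 4 > b^d = 2^1, so T(n) ∈ Θ(n^(log_2 4)) = Θ(n^2)
- T(n) = 4T(n/2) + n^2:  a = 4 = b^d = 2^2, so T(n) ∈ Θ(n^2 log n)
- T(n) = 4T(n/2) + n^3:  a = 4 < b^d = 2^3, so T(n) ∈ Θ(n^3)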

Mergesort

- Split array A[0..n-1] into two roughly equal halves and make copies of each half in arrays B and C.
- Sort arrays B and C recursively.
- Merge the sorted arrays B and C into array A as follows:
  - Repeat the following until no elements remain in one of the arrays: compare the first elements in the remaining unprocessed portions of the arrays, and copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array.
  - Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.

Pseudocode of Mergesort (the pseudocode figure is not reproduced in this transcription)

Pseudocode of Merge (the pseudocode figure is not reproduced in this transcription; a runnable sketch of both routines follows)
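
Since the two pseudocode figures were not transcribed, here is a minimal Python sketch of the same scheme, following the description two slides above; it is an illustration, not the textbook's exact pseudocode:

    def mergesort(a):
        """Sort list a in place: split, sort the halves, merge."""
        if len(a) <= 1:
            return
        mid = len(a) // 2
        b, c = a[:mid], a[mid:]       # copies of the two halves
        mergesort(b)
        mergesort(c)
        merge(b, c, a)

    def merge(b, c, a):
        """Merge sorted lists b and c back into a."""
        i = j = k = 0
        while i < len(b) and j < len(c):
            if b[i] <= c[j]:          # copy the smaller front element into a
                a[k] = b[i]; i += 1
            else:
                a[k] = c[j]; j += 1
            k += 1
        # copy whatever remains in the non-exhausted list
        a[k:] = b[i:] if i < len(b) else c[j:]

    data = [8, 3, 2, 9, 7, 1, 5, 4]
    mergesort(data)
    print(data)    # [1, 2, 3, 4, 5, 7, 8, 9]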

Mergesort Example

Splitting:
    8 3 2 9 7 1 5 4
    8 3 2 9 | 7 1 5 4
    8 3 | 2 9 | 7 1 | 5 4
    8 | 3 | 2 | 9 | 7 | 1 | 5 | 4

Merging:
    3 8 | 2 9 | 1 7 | 4 5
    2 3 8 9 | 1 4 5 7
    1 2 3 4 5 7 8 9

Analysis of Mergesort

- All cases have the same efficiency: Θ(n log n)
- The number of comparisons in the worst case is close to the theoretical minimum for comparison-based sorting: ⌈log_2 n!⌉ ≈ n log_2 n - 1.44n
- Space requirement: Θ(n) (not in-place)
- Can be implemented without recursion (bottom-up)

Quicksort

- Select a pivot (partitioning element); here, the first element.
- Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n-s positions are larger than or equal to the pivot (see next slide for an algorithm):

      A[i] ≤ p   |   p   |   A[i] ≥ p

- Exchange the pivot with the last element in the first (i.e., ≤) subarray; the pivot is now in its final position.
- Sort the two subarrays recursively.

Partitioning Algorithm (the pseudocode figure is not reproduced in this transcription; a sketch follows)
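
A minimal Python sketch of quicksort with a first-element pivot. Note the assumption: the untranscribed partitioning figure from the slide is replaced here by a simple single-scan (Lomuto-style) partition that follows the same contract, ending with the pivot exchanged into its final position:

    def quicksort(a, lo=0, hi=None):
        """Sort a[lo..hi] in place by partitioning around a[lo] and recursing."""
        if hi is None:
            hi = len(a) - 1
        if lo < hi:
            s = partition(a, lo, hi)      # pivot ends up in its final position s
            quicksort(a, lo, s - 1)
            quicksort(a, s + 1, hi)

    def partition(a, lo, hi):
        """Single-scan partition with the first element as pivot."""
        p = a[lo]
        s = lo                             # a[lo+1..s] holds elements smaller than p
        for i in range(lo + 1, hi + 1):
            if a[i] < p:
                s += 1
                a[s], a[i] = a[i], a[s]
        a[lo], a[s] = a[s], a[lo]          # exchange pivot with the last element of the <= subarray
        return s

    data = [5, 3, 1, 9, 8, 2, 4, 7]        # the array from the Quicksort Example slide
    quicksort(data)
    print(data)                            # [1, 2, 3, 4, 5, 7, 8, 9]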

Quicksort Example

Initial array: 5 3 1 9 8 2 4 7 (the step-by-step trace from the slide is not reproduced in this transcription)

Analysis of Quicksort

- Best case (split in the middle): Θ(n log n)
- Worst case (already sorted array!): Θ(n^2)
- Average case (random arrays): Θ(n log n)
- Improvements:
  - better pivot selection: median-of-three partitioning
  - switch to insertion sort on small subfiles
  - elimination of recursion
  These combine to a 20-25% improvement.
- Considered the method of choice for internal sorting of large files (n ≥ 10,000).

Binary Search

Very efficient algorithm for searching in a sorted array:

    K  vs  A[0] ... A[m] ... A[n-1]

If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m-1] if K < A[m] and in A[m+1..n-1] if K > A[m].

    l ← 0; r ← n-1
    while l ≤ r do
        m ← ⌊(l+r)/2⌋
        if K = A[m] return m
        else if K < A[m] r ← m-1
        else l ← m+1
    return -1    // unsuccessful search
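
The same procedure as runnable Python, a direct transliteration of the pseudocode above (returning -1 on an unsuccessful search is consistent with it):

    def binary_search(a, key):
        """Return an index of key in sorted list a, or -1 if key is absent."""
        l, r = 0, len(a) - 1
        while l <= r:
            m = (l + r) // 2
            if key == a[m]:
                return m
            elif key < a[m]:
                r = m - 1
            else:
                l = m + 1
        return -1

    print(binary_search([1, 3, 4, 7, 9, 12, 15], 9))   # 4
    print(binary_search([1, 3, 4, 7, 9, 12, 15], 8))   # -1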

Analysis of Binary Search

- Time efficiency:
  - worst-case recurrence: C_w(n) = 1 + C_w(⌊n/2⌋), C_w(1) = 1
  - solution: C_w(n) = ⌈log_2(n+1)⌉
  - This is VERY fast: e.g., C_w(10^6) = 20
- Optimal for searching a sorted array
- Limitations: must be a sorted array (not a linked list)
- Bad (degenerate) example of divide-and-conquer
- Has a continuous counterpart, the bisection method, for solving equations in one unknown f(x) = 0 (see Sec. 12.4)

Binary Tree Algorithms

A binary tree is a divide-and-conquer-ready structure!

Ex. 1: Classic traversals (preorder, inorder, postorder)

    Algorithm Inorder(T)
        if T ≠ ∅
            Inorder(T_left)
            print(root of T)
            Inorder(T_right)

(The slide's example trees with nodes a, b, c, d, e are not reproduced here.)

Efficiency: Θ(n)

Binary Tree Algorithms (cont.)

Ex. 2: Computing the height of a binary tree with left and right subtrees T_L and T_R:

    h(T) = max{h(T_L), h(T_R)} + 1 if T ≠ ∅, and h(∅) = -1

Efficiency: Θ(n)
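
A small Python sketch of both examples. The Node class and the sample tree are illustrative (the sample reuses the slide's node labels a-e, but its exact shape is assumed here):

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def inorder(t):
        """Ex. 1: inorder traversal (left subtree, root, right subtree)."""
        if t is not None:
            inorder(t.left)
            print(t.key, end=" ")
            inorder(t.right)

    def height(t):
        """Ex. 2: h(T) = max{h(T_L), h(T_R)} + 1, with h(empty) = -1."""
        if t is None:
            return -1
        return max(height(t.left), height(t.right)) + 1

    root = Node("a", Node("b", Node("d"), Node("e")), Node("c"))
    inorder(root)          # d b e a c
    print()
    print(height(root))    # 2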

Multiplication of Large Integers

Consider the problem of multiplying two (large) n-digit integers represented by arrays of their digits, such as:

    A = 12345678901357986429
    B = 87654321284820912836

The grade-school algorithm:

          a_1  a_2  ...  a_n
        × b_1  b_2  ...  b_n
        ---------------------
    (d_10) d_11 d_12 ... d_1n
    (d_20) d_21 d_22 ... d_2n
      ...
    (d_n0) d_n1 d_n2 ... d_nn

Efficiency: n^2 one-digit multiplications

First Divide-and-Conquer Algorithm

A small example: A × B, where A = 2135 and B = 4014.

    A = (21 × 10^2 + 35),  B = (40 × 10^2 + 14)

So,

    A × B = (21 × 10^2 + 35) × (40 × 10^2 + 14)
          = 21×40 × 10^4 + (21×14 + 35×40) × 10^2 + 35×14

In general, if A = A_1 A_2 and B = B_1 B_2 (where A and B are n-digit numbers and A_1, A_2, B_1, B_2 are n/2-digit numbers),

    A × B = A_1×B_1 × 10^n + (A_1×B_2 + A_2×B_1) × 10^(n/2) + A_2×B_2

Recurrence for the number of one-digit multiplications M(n):

    M(n) = 4M(n/2),  M(1) = 1

Solution: M(n) = n^2

Second Divide-and-Conquer Algorithm

    A × B = A_1×B_1 × 10^n + (A_1×B_2 + A_2×B_1) × 10^(n/2) + A_2×B_2

The idea is to decrease the number of multiplications from 4 to 3:

    (A_1 + A_2) × (B_1 + B_2) = A_1×B_1 + (A_1×B_2 + A_2×B_1) + A_2×B_2,

i.e.,

    (A_1×B_2 + A_2×B_1) = (A_1 + A_2) × (B_1 + B_2) - A_1×B_1 - A_2×B_2,

which requires only 3 multiplications at the expense of (4-1) extra additions/subtractions.

Recurrence for the number of multiplications M(n):

    M(n) = 3M(n/2),  M(1) = 1

Solution: M(n) = 3^(log_2 n) = n^(log_2 3) ≈ n^1.585

Example of Large-Integer Multiplication

2135 × 4014 (the worked trace from the slide is not reproduced in this transcription; a runnable sketch follows)
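
A minimal Python sketch of the three-multiplication scheme (Karatsuba's algorithm), operating on Python integers rather than digit arrays; splitting at half the decimal length of the longer factor is an implementation choice made here, not a detail from the slides:

    def karatsuba(a, b):
        """Multiply non-negative integers a and b with three recursive multiplications."""
        if a < 10 or b < 10:                      # a one-digit factor: multiply directly
            return a * b
        half = max(len(str(a)), len(str(b))) // 2
        p = 10 ** half
        a1, a2 = divmod(a, p)                     # a = a1 * 10^half + a2
        b1, b2 = divmod(b, p)
        m1 = karatsuba(a1, b1)                    # A1*B1
        m2 = karatsuba(a2, b2)                    # A2*B2
        m3 = karatsuba(a1 + a2, b1 + b2)          # (A1+A2)*(B1+B2)
        return m1 * p * p + (m3 - m1 - m2) * p + m2

    print(karatsuba(2135, 4014))                  # 8569890, the slide's example
    print(karatsuba(2135, 4014) == 2135 * 4014)   # True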

Strassen's Matrix Multiplication

Strassen observed [1969] that the product of two matrices can be computed as follows:

    [ C_00  C_01 ]   [ A_00  A_01 ]   [ B_00  B_01 ]
    [            ] = [            ] * [            ]
    [ C_10  C_11 ]   [ A_10  A_11 ]   [ B_10  B_11 ]

                     [ M_1 + M_4 - M_5 + M_7        M_3 + M_5             ]
                   = [                                                    ]
                     [ M_2 + M_4                    M_1 + M_3 - M_2 + M_6 ]

Formulas for Strassen's Algorithm

    M_1 = (A_00 + A_11) × (B_00 + B_11)
    M_2 = (A_10 + A_11) × B_00
    M_3 = A_00 × (B_01 - B_11)
    M_4 = A_11 × (B_10 - B_00)
    M_5 = (A_00 + A_01) × B_11
    M_6 = (A_10 - A_00) × (B_00 + B_01)
    M_7 = (A_01 - A_11) × (B_10 + B_11)
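
A small Python check of these formulas on 2×2 matrices of numbers. In the full algorithm the blocks A_ij, B_ij are n/2 × n/2 submatrices and the seven products are computed recursively; this sketch only verifies the identities on plain numbers:

    def strassen_2x2(A, B):
        """Multiply 2x2 matrices using Strassen's seven products."""
        (a00, a01), (a10, a11) = A
        (b00, b01), (b10, b11) = B
        m1 = (a00 + a11) * (b00 + b11)
        m2 = (a10 + a11) * b00
        m3 = a00 * (b01 - b11)
        m4 = a11 * (b10 - b00)
        m5 = (a00 + a01) * b11
        m6 = (a10 - a00) * (b00 + b01)
        m7 = (a01 - a11) * (b10 + b11)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4,           m1 + m3 - m2 + m6]]

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(strassen_2x2(A, B))   # [[19, 22], [43, 50]], the same as ordinary multiplication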

Analysis of Strassen's Algorithm

If n is not a power of 2, the matrices can be padded with zeros.

Number of multiplications:

    M(n) = 7M(n/2),  M(1) = 1

Solution: M(n) = 7^(log_2 n) = n^(log_2 7) ≈ n^2.807, vs. n^3 for the brute-force algorithm.

Algorithms with better asymptotic efficiency are known, but they are even more complex.

Closest-Pair Problem by Divide-and-Conquer

Step 1: Divide the given points into two subsets S_1 and S_2 by a vertical line x = c so that half the points lie to the left of or on the line and half the points lie to the right of or on the line.

Closest Pair by Divide-and-Conquer (cont.)

Step 2: Find recursively the closest pairs for the left and right subsets, with distances d_1 and d_2.

Step 3: Set d = min{d_1, d_2}. We can limit our attention to the points in the symmetric vertical strip of width 2d around the dividing line as candidates for a possibly closer pair. Let C_1 and C_2 be the subsets of points of the left subset S_1 and of the right subset S_2, respectively, that lie in this vertical strip. The points in C_1 and C_2 are stored in increasing order of their y coordinates, which is maintained by merging during the execution of the next step.

Step 4: For every point P(x, y) in C_1, we inspect the points in C_2 that may be closer to P than d. There can be no more than 6 such points (because d ≤ d_2)! A runnable sketch follows.
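
A compact Python sketch of the scheme. One deviation from the slide is flagged in the comments: instead of maintaining the y-order of the strip by merging (which yields the O(n log n) bound derived two slides below), the strip is simply re-sorted at each level, which is easier to show but costs an extra logarithmic factor. The sample points are made up for illustration:

    import math

    def brute_force(pts):
        d = math.inf
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = min(d, math.dist(pts[i], pts[j]))
        return d

    def closest_distance(px):
        """Smallest pairwise distance; px must be sorted by x-coordinate."""
        n = len(px)
        if n <= 3:
            return brute_force(px)
        mid = n // 2
        c = px[mid][0]                              # dividing vertical line x = c
        d = min(closest_distance(px[:mid]),         # d_1 and d_2 from the two halves
                closest_distance(px[mid:]))
        strip = [p for p in px if abs(p[0] - c) < d]
        strip.sort(key=lambda p: p[1])              # slide: kept sorted by merging instead
        for i in range(len(strip)):
            j = i + 1
            while j < len(strip) and strip[j][1] - strip[i][1] < d:
                d = min(d, math.dist(strip[i], strip[j]))
                j += 1
        return d

    points = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
    print(closest_distance(sorted(points)))   # 1.414..., the pair (2, 3)-(3, 4)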

Closest Pair by Divide-and-Conquer: Worst Case

The worst-case scenario is depicted in a figure on the slide, which is not reproduced in this transcription.

Efficiency of the Closest-Pair Algorithm

The running time of the algorithm is described by

    T(n) = 2T(n/2) + M(n), where M(n) ∈ O(n)

By the Master Theorem (with a = 2, b = 2, d = 1):

    T(n) ∈ O(n log n)

Quickhull Algorithm

Convex hull: the smallest convex set that includes the given points.

- Assume the points are sorted by x-coordinate values.
- Identify the extreme points P_1 and P_2 (leftmost and rightmost).
- Compute the upper hull recursively (a sketch follows):
  - find the point P_max that is farthest away from line P_1 P_2
  - compute the upper hull of the points to the left of line P_1 P_max
  - compute the upper hull of the points to the left of line P_max P_2
- Compute the lower hull in a similar manner.

(The slide's figure showing P_1, P_2 and P_max is not reproduced here.)
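
A minimal Python sketch of quickhull. The cross-product test used to select points to the left of a directed line and to find the farthest point is a standard choice assumed here; it is not spelled out on the slide:

    def cross(o, a, b):
        """Twice the signed area of triangle o-a-b; positive if b lies to the left of line o->a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def hull_side(p1, p2, pts):
        """Hull points strictly to the left of the directed line p1->p2, in order from p1 to p2."""
        left = [p for p in pts if cross(p1, p2, p) > 0]
        if not left:
            return []
        # the point with the largest cross product is farthest from line p1-p2
        pmax = max(left, key=lambda p: cross(p1, p2, p))
        return hull_side(p1, pmax, left) + [pmax] + hull_side(pmax, p2, left)

    def quickhull(points):
        """Return the convex hull vertices in clockwise order, starting at the leftmost point."""
        pts = sorted(set(points))                  # sort by x-coordinate (then y)
        if len(pts) < 3:
            return pts
        p1, p2 = pts[0], pts[-1]                   # leftmost and rightmost extreme points
        upper = hull_side(p1, p2, pts)             # points above line P1-P2
        lower = hull_side(p2, p1, pts)             # points below line P1-P2
        return [p1] + upper + [p2] + lower

    print(quickhull([(0, 0), (4, 0), (2, 3), (1, 1), (3, 1), (2, -2)]))
    # [(0, 0), (2, 3), (4, 0), (2, -2)]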

Efficiency of Quickhull Algorithm

- Finding the point farthest away from line P_1 P_2 can be done in linear time.
- Time efficiency:
  - worst case: Θ(n^2) (as quicksort)
  - average case: Θ(n) (under reasonable assumptions about the distribution of the given points)
- If the points are not initially sorted by x-coordinate value, this can be accomplished in O(n log n) time.
- Several O(n log n) algorithms for the convex hull are known.