Programming in Haskell Aug-Nov 2015


Programming in Haskell, Aug-Nov 2015
Lecture 11, September 10, 2015
S P Suresh, Chennai Mathematical Institute

Measuring efficiency

Computation is reduction: application of definitions as rewriting rules.
Count the number of reduction steps: the running time is T(n) for input size n.

Example: Complexity of ++

[] ++ y     = y
(x:xs) ++ y = x:(xs ++ y)

[1,2,3] ++ [4,5,6]
1:([2,3] ++ [4,5,6])
1:(2:([3] ++ [4,5,6]))
1:(2:(3:([] ++ [4,5,6])))
1:(2:(3:[4,5,6]))

For l1 ++ l2: the second rule is used length l1 times and the first rule once, always.
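The counting argument can be checked by threading a step counter through the recursion. This is a throwaway sketch (appendCount is not part of the lecture): it returns the appended list together with the number of rewrite rules applied.

```haskell
-- Hypothetical helper: append two lists while counting how many
-- rewrite rules fire: the base rule once, the cons rule once per
-- element of the first list.
appendCount :: [a] -> [a] -> ([a], Int)
appendCount []     ys = (ys, 1)               -- first rule, used once
appendCount (x:xs) ys = (x : zs, steps + 1)   -- second rule, once per element
  where (zs, steps) = appendCount xs ys
```

For [1,2,3] ++ [4,5,6] this counts 4 steps: the second rule three times and the first rule once, matching the analysis above.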

Example: elem

elem :: Int -> [Int] -> Bool
elem i [] = False
elem i (x:xs)
  | i == x    = True
  | otherwise = elem i xs

elem 3 [4,7,8,9]
elem 3 [7,8,9]
elem 3 [8,9]
elem 3 [9]
elem 3 []
False

elem 3 [3,7,8,9]
True

Complexity depends on both the size and the value of the input.

Variation across inputs

Worst case complexity: the maximum running time over all inputs of size n. Pessimistic: the worst case may be rare.
Average case: more realistic, but difficult or impossible to compute.

Asymptotic complexity

Interested in T(n) in terms of orders of magnitude.
f(n) = O(g(n)) if there is a constant k such that f(n) <= k*g(n) for all n > 0.
an^2 + bn + c = O(n^2) for all a,b,c (take k = a+b+c when a,b,c > 0).
Ignore constant factors and lower order terms.
Typical orders: O(n), O(n log n), O(n^k), O(2^n), ...

Asymptotic complexity

Complexity of ++ is O(n), where n is the length of the first list.
Complexity of elem is O(n): worst case!

Complexity of reverse

myreverse :: [a] -> [a]
myreverse [] = []
myreverse (x:xs) = (myreverse xs) ++ [x]

Analyze directly (like ++), or write a recurrence for T(n):
T(0) = 1
T(n) = T(n-1) + n
Solve by expanding the recurrence.

Complexity of reverse

T(0) = 1
T(n) = T(n-1) + n
     = (T(n-2) + (n-1)) + n
     = ((T(n-3) + (n-2)) + (n-1)) + n
     ...
     = T(0) + 1 + 2 + ... + n
     = 1 + n(n+1)/2
     = O(n^2)
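The closed form can be sanity-checked by evaluating the recurrence directly. A throwaway sketch (revSteps is not from the slides):

```haskell
-- Hypothetical: evaluate T(0) = 1, T(n) = T(n-1) + n directly,
-- to compare against the closed form 1 + n(n+1)/2.
revSteps :: Int -> Int
revSteps 0 = 1
revSteps n = revSteps (n-1) + n
```

For example revSteps 10 gives 56, which agrees with 1 + 10*11/2.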

Speeding up reverse

Can we do better?
Imagine we are reversing a heavy stack of books:
transfer them to a new stack, top to bottom.
The new stack is in reverse order!

Speeding up reverse

transfer :: [a] -> [a] -> [a]
transfer [] l = l
transfer (x:xs) l = transfer xs (x:l)

Input size for transfer l1 l2 is length l1.

Recurrence:
T(0) = 1
T(n) = T(n-1) + 1
Expanding: T(n) = 1 + 1 + ... + 1 = O(n)

Speeding up reverse

fastreverse :: [a] -> [a]
fastreverse l = transfer l []

Complexity is O(n).
We need to understand the computational model to achieve efficiency.

Summary

Measure complexity in Haskell in terms of reduction steps.
Account for input size and values; usually worst-case complexity.
Asymptotic complexity: ignore constants and lower order terms, T(n) = O(f(n)).

Sorting

The goal is to arrange a list in ascending order.
How would we sort a pack of cards?
A single card is sorted.
Put the second card before/after the first.
Insert the third, fourth, ... card in its correct place.
This is insertion sort.

Insertion sort : insert

Insert an element in a sorted list:

insert :: Int -> [Int] -> [Int]
insert x [] = [x]
insert x (y:ys)
  | x <= y    = x:y:ys
  | otherwise = y:(insert x ys)

Clearly T(n) = O(n).

Insertion sort : isort

isort :: [Int] -> [Int]
isort [] = []
isort (x:xs) = insert x (isort xs)

Alternatively:
isort = foldr insert []

Recurrence:
T(0) = 1
T(n) = T(n-1) + O(n)

Complexity: T(n) = O(n^2)

A better strategy?

Divide the list into two equal parts.
Separately sort the left and right halves.
Combine the two sorted halves to get the full list sorted.

Combining sorted lists

Given two sorted lists l1 and l2, combine them into a sorted list l3:
Compare the first elements of l1 and l2, and move the smaller into l3.
Repeat until both l1 and l2 are exhausted.
This is merging l1 and l2.

Merging two sorted lists

Merging [32,74,89] and [21,55,64], one comparison at a time, the output grows:
21
21 32
21 32 55
21 32 55 64
21 32 55 64 74
21 32 55 64 74 89

Merge Sort

Sort l!!0 to l!!(n/2 - 1).
Sort l!!(n/2) to l!!(n-1).
Merge the sorted halves into l'.
How do we sort the halves? Recursively, using the same strategy!

Merge Sort

Sorting [43,32,22,78,63,57,91,13]:

Split:  43 32 22 78 | 63 57 91 13
Split:  43 32 | 22 78 | 63 57 | 91 13
Split:  43 | 32 | 22 | 78 | 63 | 57 | 91 | 13
Merge:  32 43 | 22 78 | 57 63 | 13 91
Merge:  22 32 43 78 | 13 57 63 91
Merge:  13 22 32 43 57 63 78 91

Merge sort : merge

merge :: [Int] -> [Int] -> [Int]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x:(merge xs (y:ys))
  | otherwise = y:(merge (x:xs) ys)

Each comparison adds one element to the output.
T(n) = O(n), where n is the sum of the lengths of the input lists.

Merge sort

mergesort :: [Int] -> [Int]
mergesort [] = []
mergesort [x] = [x]
mergesort l = merge (mergesort (front l)) (mergesort (back l))
  where
    front l = take ((length l) `div` 2) l
    back l  = drop ((length l) `div` 2) l

Analysis of Merge Sort

T(n): time taken by Merge Sort on input of size n.
Assume, for simplicity, that n = 2^k.
T(n) = 2T(n/2) + 2n:
two subproblems of size n/2;
splitting the list into front and back takes n steps;
merging the sorted halves takes O(n/2 + n/2) = O(n).
Solve the recurrence by unwinding.

Analysis of Merge Sort

T(1) = 1
T(n) = 2T(n/2) + 2n
     = 2[2T(n/2^2) + n] + 2n        = 2^2 T(n/2^2) + 4n
     = 2^2 [2T(n/2^3) + 2n/2^2] + 4n = 2^3 T(n/2^3) + 6n
     ...
     = 2^j T(n/2^j) + 2jn

When j = log n, n/2^j = 1, so T(n/2^j) = 1.
T(n) = 2^j T(n/2^j) + 2jn = 2^(log n) + 2(log n)n = n + 2n log n = O(n log n)
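The unwinding can be confirmed numerically for powers of two. In this sketch (msteps and closedForm are illustrative names, not from the lecture), the recurrence is evaluated directly and compared against the closed form n + 2n log n:

```haskell
-- Hypothetical: evaluate T(1) = 1, T(n) = 2T(n/2) + 2n for n a power of 2.
msteps :: Int -> Int
msteps 1 = 1
msteps n = 2 * msteps (n `div` 2) + 2*n

-- Closed form for n = 2^k: n + 2*n*k, i.e. n + 2n log n.
closedForm :: Int -> Int
closedForm k = n + 2*n*k  where n = 2^k
```

For example msteps 8 and closedForm 3 both give 8 + 2*8*3 = 56.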

Avoid merging

Some elements in the left half move right and vice versa.
Can we ensure that everything on the left is smaller than everything on the right?
Suppose the median value in the list is m.
Move all values <= m to the left half of the list; the right half then has the values > m.
Recursively sort the left and right halves.
The list is now sorted! No need to merge.

Avoid merging

How do we find the median? Sort and pick the middle element.
But our aim is to sort!
Instead, pick some value in the list: the pivot.
Split the list with respect to this pivot element.

Quicksort

Choose a pivot element, typically the first value in the list.
Partition the list into lower and upper parts with respect to the pivot.
Move the pivot between the lower and upper partitions.
Recursively sort the two partitions.

Quicksort: high level view

43 32 22 78 63 57 91 13   (choose pivot 43)
13 32 22 43 63 57 91 78   (partition around the pivot)
13 22 32 43 57 63 78 91   (recursively sort the partitions)

Quicksort

quicksort :: [Int] -> [Int]
quicksort [] = []
quicksort (x:xs) = (quicksort lower) ++ [splitter] ++ (quicksort upper)
  where
    splitter = x
    lower = [ y | y <- xs, y <= x ]
    upper = [ y | y <- xs, y > x ]

Analysis of Quicksort

Worst case: the pivot is the maximum or minimum.
One partition is empty, the other has size n-1.
T(n) = T(n-1) + n = T(n-2) + (n-1) + n = ... = 1 + 2 + ... + n = O(n^2)
An already sorted list is a worst-case input!
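The worst case can be seen concretely by counting comparisons. This sketch instruments the quicksort above (qsortCount is an illustrative name, not from the lecture), counting one abstract comparison per remaining element at each partitioning step:

```haskell
-- Hypothetical: count (abstract) comparisons made by quicksort,
-- one per remaining element at each partitioning step.
qsortCount :: [Int] -> Int
qsortCount [] = 0
qsortCount (x:xs) = length xs + qsortCount lower + qsortCount upper
  where
    lower = [ y | y <- xs, y <= x ]
    upper = [ y | y <- xs, y > x ]
```

On the already sorted input [1..n] the lower partition is always empty, giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, i.e. O(n^2).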

Analysis of Quicksort

But the average case is O(n log n).
Sorting is a rare example where the average case can be computed.
What does average case mean?

Quicksort: Average case

Assume the input is a permutation of {1,2,...,n}:
the actual values are not important, only their relative order matters.
Each input is equally likely (uniform probability).
Calculate the running time averaged across all inputs.
The expected running time can be shown to be O(n log n).

Summary

Sorting is an important starting point for many functions on lists.
Insertion sort is a natural inductive sort whose complexity is O(n^2).
Merge sort has complexity O(n log n).
Quicksort has worst-case complexity O(n^2) but average-case complexity O(n log n).