
Big-Oh

asymptotic growth rate (or order)

compare two functions, but ignore constant factors and small inputs

example: f(n) = 1,000,000 n^2; g(n) = 2^n
g grows faster: eventually much bigger than f

[plot: f and g versus n, up to n = 35, y-axis up to 2 x 10^9; goal is to predict behavior at large n (far right) and ignore behavior at small n]

preview: what functions?

example: comparing sorting algorithms
runtime = f(size of input)
  e.g. seconds to sort = f(number of elements in list)
  e.g. # operations to sort = f(number of elements in list)
space = f(size of input)
  e.g. number of bytes of memory = f(number of elements in list)

theory, not empirical

yes, you can make guesses about big-oh behavior from measurements
but, no, graphs are not a big-oh comparison:
  what happens further to the right? might not have tested big enough
  want to write down a formula

example: summing a list of n items takes exactly n addition operations
assume each one takes k units of time, then runtime = f(n) = kn
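to make the counting concrete, a minimal Java sketch (not from the original slides) that sums a list and counts the additions, so the runtime model f(n) = kn falls out directly:

    // Sketch: counting the additions performed while summing a list,
    // so runtime can be modeled as f(n) = k*n.
    import java.util.Arrays;
    import java.util.List;

    public class SumCount {
        static long additions = 0;  // operation counter

        static int sum(List<Integer> items) {
            int total = 0;
            for (int x : items) {
                total += x;      // one addition per element...
                additions++;     // ...so exactly n additions for n elements
            }
            return total;
        }

        public static void main(String[] args) {
            List<Integer> items = Arrays.asList(3, 1, 4, 1, 5, 9);
            System.out.println("sum = " + sum(items));       // 23
            System.out.println("additions = " + additions);  // 6 = n
        }
    }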

recall: comparing list data structures

List benchmark (from intro slides) with 100,000 elements; times in seconds:

Data structure   Total    Insert  Search   Delete
Vector           87.818   0.004   63.202   24.612
ArrayList        87.192   0.010   62.470   24.712
LinkedList       263.776  0.006   196.550  67.439
HashSet          0.029    0.022   0.003    0.004
TreeSet          0.134    0.110   0.017    0.007
Vector, sorted   2.642    0.009   0.024    2.609

some runtimes get really big as size gets large; others seem to remain manageable
the problem: growth rate of runtimes with list size
  for Vector (unsorted), ArrayList, LinkedList: # operations grows like n, where n is the list size
  for HashSet: # operations per search/remove is constant (sort of)
  for TreeSet and sorted Vector: # operations per search grows like log(n), where n is the list size
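for intuition, a rough Java sketch of how such search timings might be produced with the standard java.util collections (an assumption: the table's exact harness is not shown; absolute times vary by machine):

    // Sketch: repeated searches in a list vs. a hash set.
    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class SearchBench {
        public static void main(String[] args) {
            int n = 100000;
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++)
                list.contains(i);   // linear scan each time: ~n^2 total work (slow, like the table)
            long t1 = System.nanoTime();
            for (int i = 0; i < n; i++)
                set.contains(i);    // hash lookup: ~constant each, ~n total work
            long t2 = System.nanoTime();

            System.out.printf("ArrayList search: %.3f s%n", (t1 - t0) / 1e9);
            System.out.printf("HashSet search:   %.3f s%n", (t2 - t1) / 1e9);
        }
    }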

why asymptotic analysis?

can my program work when data gets big?
  website gets thousands of new users?
  text editor opening a 1 MB book? a 1 GB log file?
  music player sees a 1,000 song collection? 50,000?
  text search on a 100 petabyte copy of the text of the web?

if asymptotic analysis says no:
  can find out before implementing the algorithm
  won't be fixed by, e.g., buying a faster CPU

sets of functions

define sets of functions based on an example function f:
  Ω(f): grow no slower than f ("≥ f")
  O(f): grow no faster than f ("≤ f")
  Θ(f) = Ω(f) ∩ O(f): grow as fast as f ("= f")

examples:
  n^3 ∈ Ω(n^2)
  100n ∈ O(n^2)
  10n^2 + n ∈ Θ(n^2) (ignore constant factors, etc.), and so 10n^2 + n ∈ O(n^2) and 10n^2 + n ∈ Ω(n^2)

what are we measuring?

f(n) = worst-case running time
n = input size, as a positive integer

will compare f to another function g(n):
  example: f(n) ∈ O(g(n)) (or f ∈ O(g)); informally: "f is big-oh of g"
  example: f(n) ∉ Ω(g(n)) (or f ∉ Ω(g)); informally: "f is not big-omega of g"

worst case?

this class: almost always worst cases
intuition: detect if the program will ever take forever

example: iterating through an array until we find a value
  best case: look at one value, it's the one we want
  worst case: look at every value, none of them are what we want

f(n) is the run time of the slowest input of size n

formal definitions

f(n) ∈ O(g(n)): there exist c > 0 and n₀ > 0 such that for all n > n₀, f(n) ≤ c·g(n)

f(n) ∈ Ω(g(n)): there exist c > 0 and n₀ > 0 such that for all n > n₀, f(n) ≥ c·g(n)

f(n) ∈ Θ(g(n)): f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))
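a small Java sketch (illustrative only) that spot-checks the O() inequality for chosen witnesses c and n₀; as the earlier slide says, a finite check can falsify but never prove a big-oh claim:

    // Sketch: spot-checking f(n) <= c*g(n) for all sampled n in (n0, limit].
    public class BigOhCheck {
        interface Fn { double apply(long n); }

        static boolean holds(Fn f, Fn g, double c, long n0, long limit) {
            for (long n = n0 + 1; n <= limit; n++)
                if (f.apply(n) > c * g.apply(n)) return false;
            return true;
        }

        public static void main(String[] args) {
            Fn f = n -> 10.0 * n;      // f(n) = 10n
            Fn g = n -> (double) n;    // g(n) = n
            // witnesses from the slides: c = 11, n0 = 2
            System.out.println(holds(f, g, 11, 2, 1_000_000));  // true
            // n^2 vs n: any fixed c fails once n is large enough
            System.out.println(holds(n -> (double) n * n, g, 1000, 2, 1_000_000));  // false
        }
    }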

formal definition example (1)

f(n) ∈ O(g(n)) if and only if there exist c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n > n₀

Is n ∈ O(n^2)?
  choose c = 1, n₀ = 2
  for n > 2 = n₀: n ≤ c·n^2 = n^2
  Yes!

formal definition example (2)

f(n) ∈ O(g(n)) if and only if there exist c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n > n₀

Is 10n ∈ O(n)?
  choose c = 11, n₀ = 2
  for n > 2 = n₀: f(n) = 10n ≤ c·g(n) = 11n
  Yes!

don't need to choose the smallest possible c

negating formal definitions

f ∈ O(g): there exist c, n₀ > 0 so that for all n > n₀: f(n) ≤ c·g(n)
f ∉ O(g): there do not exist c, n₀ > 0 so that for all n > n₀: f(n) ≤ c·g(n)
  equivalently: for all c, n₀, there exists n > n₀: f(n) > c·g(n)

formal definition example (3)

f(n) ∈ O(g(n)) if and only if there exist c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n > n₀

Is n^2 ∈ O(n)? no
  consider any c, n₀ > 0 and let n_bad = (c + 100)(n₀ + 100) > n₀
  then n_bad^2 = (c + 100)^2 (n₀ + 100)^2 > c(c + 100)(n₀ + 100) = c·n_bad
  so no c, n₀ satisfy the definition (i.e. f(n_bad) = n_bad^2 > c·g(n_bad) = c·n_bad)

alternative choice: n_bad = max{c + 100, n₀ + 1} > n₀

formal definition example (4)

f(n) ∈ O(g(n)) if and only if there exist c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n > n₀

consider f(n) = 100n^2 + n, g(n) = n^2:
  choose c = 200, n₀ = 2
  observe for n > 2: 100n^2 + n ≤ 101n^2
  so for n > 2 = n₀: f(n) = 100n^2 + n ≤ 101n^2 ≤ c·g(n) = 200n^2

big-oh proofs generally

if proving the "yes" case:
  look at the inequality
  choose a large enough c and n₀ that it's definitely true
  don't bother finding the smallest c, n₀ that work

if proving the "no" case:
  game: given c, n₀, find a counterexample
  general idea: choose n > n₀ using a formula based on c, then show that this n never satisfies the inequality
  don't bother showing the inequality fails for every n > n₀; one counterexample per c, n₀ is enough
  don't bother finding the smallest n that works

aside: forall/exists

∀n > 0: "for all n > 0"
∃n < 0: "there exists an n < 0"

definition consequences

If f ∈ O(h) and g ∉ O(h), which are true?
1. ∀m > 0, f(m) < g(m) (for all m, f is less than g)
2. ∃m > 0, f(m) < g(m) (there exists an m where f is less than g)
3. ∃m₀ > 0, ∀m > m₀, f(m) < g(m) (there exists an m₀ so that for all larger m, f is less than g)
4. 1 and 2
5. 2 and 3
6. 1, 2, and 3

f ∈ O(h), g ∉ O(h) does not imply ∀m. f(m) < g(m)

counterexample: f(n) = 5n; g(n) = n^3; h(n) = n^2
  f ∈ O(h): 5n ≤ c·n^2 for all n > n₀ with c = 6, n₀ = 2
  g ∉ O(h): n^3 ≤ c·n^2? no; use n = c·n₀ as a counterexample (next slide)
  but m = 2: f(m) = 10 ≥ g(m) = 8

intuition: big-oh ignores behavior for small n

n^3 ∉ O(n^2)

big-oh definition would require: n^3 ≤ c·n^2 for all n > n₀
choose any c > 1 and n₀ > 1; then n = c·n₀ is a counterexample:
  n^3 = c^3·n₀^3 = c·n₀·(c·n₀)^2 > c·n^2
contradicting the definition (and for c < 1, use n = n₀ + 1, etc.)

f ∈ O(h), g ∉ O(h) ⇒ ∃m. f(m) < g(m)

intuition: should be true for big enough m
assume the definitions of big-oh:
  f ∈ O(h): ∀n > n₀: f(n) ≤ c·h(n) (for some n₀, c > 0)
  g ∉ O(h): ∃n > n₀: g(n) > c·h(n) (for any n₀, c > 0)
take f's n₀ and c; use the n that must exist for g (from the definition): at that n, f(n) ≤ c·h(n) < g(n)

f ∈ O(h), g ∉ O(h) ⇒? ∃m₀ ∀m > m₀. f(m) < g(m)

intuitively, seems so: g must grow faster than f for big m (f(m) ≤ c₁·h(m), but g(m) > c₂·h(m) for arbitrarily large m)
but there are corner-case counterexamples:
  f(n) = n
  g(n) = 1 if n odd; n^2 if n even
  h(n) = n
true with an additional restriction: f, g monotonic (g(n) ≤ g(n + 1), etc.)

function hierarchy

example functions, ordered by growth rate:
  n + log(n); 3n^2 + n; 100n^2 + n^1.9; n^2.5; 5n^3 + n^2; 1.4^n

relative to n^2:
  O(n^2), upper bound ("≤"): n + log(n); 3n^2 + n; 100n^2 + n^1.9
  Ω(n^2), lower bound ("≥"): 3n^2 + n; 100n^2 + n^1.9; n^2.5; 5n^3 + n^2; 1.4^n
  Θ(n^2), tight bound ("="), where O and Ω overlap: 3n^2 + n; 100n^2 + n^1.9
  o(n^2) ("little-oh"), strict upper bound ("<"): n + log(n)
  ω(n^2), strict lower bound (">"): n^2.5; 5n^3 + n^2; 1.4^n

g ∈ o(f): g(n) < c·f(n) for every c > 0 (for large enough n); versus O(f): g(n) ≤ c·f(n) for some c
g ∈ ω(f): g(n) > c·f(n) for every c > 0 (for large enough n); versus Ω(f): g(n) ≥ c·f(n) for some c

big-oh variants

O(f): asymptotically "less than or equal to" f
o(f): asymptotically "less than" f
Ω(f): asymptotically "greater than or equal to" f
ω(f): asymptotically "greater than" f
Θ(f): asymptotically "equal to" f

limit-based definition

lim sup_{n→∞} f(n)/g(n) = X

if and only if:
  X < ∞: f ∈ O(g)
  X > 0: f ∈ Ω(g)
  0 < X < ∞: f ∈ Θ(g)
  X = 0: f ∈ o(g)
  X = ∞ (and for the lim inf): f ∈ ω(g)
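as a worked check of the limit rule, using functions from the hierarchy slide (standard limits, not from the slides):

    \lim_{n\to\infty} \frac{10n^2 + n}{n^2} = 10,\quad 0 < 10 < \infty
      \;\Rightarrow\; 10n^2 + n \in \Theta(n^2)

    \lim_{n\to\infty} \frac{n + \log n}{n^2} = 0
      \;\Rightarrow\; n + \log n \in o(n^2)

    \lim_{n\to\infty} \frac{n^{2.5}}{n^2} = \infty
      \;\Rightarrow\; n^{2.5} \in \omega(n^2)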

lim sup?

lim sup f(n)/g(n): the "limit superior"
  equal to the normal lim if that is defined
  only care about the upper bound, e.g. the n^2 case in f(n) = 1 if n odd; n^2 if n even
usually glossed over (including in Bloomfield's/Floryan's slides from prior semesters)

some big-oh properties (1)

for O and Ω and Θ:
  O(f + g) = O(max(f, g))
  f ∈ O(g) and g ∈ O(h) ⇒ f ∈ O(h) (also holds for o (little-oh) and ω)
  f ∈ O(f)

some big-oh properties (2)

f ∈ O(g) ⇔ g ∈ Ω(f)
f ∈ Θ(g) ⇔ g ∈ Θ(f) (does not hold for O, Ω, etc.)
Θ is an equivalence relation: reflexive, transitive, etc.

a note on "="

informally, sometimes people write 5n^2 = O(n^2)
not very precise: O is a set of functions, so 5n^2 ∈ O(n^2)

selected asymptotic relationships

for k > 0, l > 0, c > 1, ɛ > 0:
  n^k ∈ o(c^(n^l)) (a polynomial is always smaller than an exponential)
  n^k ∈ o(n^k log n) (adding a log makes something bigger)
  log_k(n) ∈ Θ(log_l(n)) (all log bases are the same)
  n^k + c·n^(k-1) ∈ Θ(n^k) (only the polynomial's degree matters)

a note on logs

log_k(n) = log_l(n) / log_l(k) = c·log_l(n), where c = 1/log_l(k) is a constant
therefore Θ(log_k(n)) = Θ(log_l(n))
so it doesn't matter which base of log we mean
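a quick Java check that the ratio between two log bases is a constant, independent of n (a sketch, not from the slides):

    // Sketch: log bases differ only by a constant factor,
    // so the base is irrelevant inside Θ.
    public class LogBases {
        public static void main(String[] args) {
            for (long n : new long[]{10, 1000, 1_000_000}) {
                double log2 = Math.log(n) / Math.log(2);  // log base 2
                double log10 = Math.log10(n);             // log base 10
                // ratio is always log(10)/log(2) ≈ 3.3219, whatever n is
                System.out.printf("n=%d  log2(n)/log10(n) = %.4f%n", n, log2 / log10);
            }
        }
    }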

some names

Θ(1): constant (some fixed maximum); e.g. reading the kth element of an array
Θ(log n): logarithmic; e.g. binary search of a sorted array
Θ(n): linear; e.g. searching an unsorted array
Θ(n log n): log-linear; e.g. sorting an array by comparing elements
Θ(n^2): quadratic
Θ(n^3): cubic
Θ(2^n), Θ(c^n): exponential


big-oh rules of thumb (1)

for (int i = 0; i < N; ++i)
    foo();
runtime: Θ(N × (runtime of foo))
(time to increment i? a constant factor, ignored by Θ)

for (int i = 0; i < N; ++i)
    for (int j = 0; j < M; ++j)
        bar();
runtime: Θ(N × M × (runtime of bar))
(nested loops work inside out: find the time of the inner loop, then multiply by the iterations of the outer loop)

for (int i = 0; i < N; ++i)
    for (int j = 0; j < i; ++j)
        foo();
runtime: Θ((Σ i for i = 0..N) × runtime of foo) = Θ(N^2 × runtime of foo)
(at least N/2 outer iterations make at least N/2 calls to foo each ⇒ at least N/2 × N/2 = N^2/4 calls; also at most N × N = N^2 calls ⇒ # calls to foo is Θ(N^2))
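a small Java sketch confirming the triangle-loop count, which comes to N(N-1)/2 and is therefore Θ(N^2) (illustrative, not from the slides):

    // Sketch: count inner-loop calls for the triangle loop.
    public class TriangleCount {
        public static void main(String[] args) {
            int N = 1000;
            long calls = 0;
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < i; ++j)
                    calls++;                        // stands in for foo()
            System.out.println(calls);              // 499500
            System.out.println((long) N * (N - 1) / 2);  // same: N(N-1)/2
        }
    }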

big-oh rules of thumb (2)

foo();
bar();
runtime = runtime of foo + runtime of bar = Θ(max{foo runtime, bar runtime})

if (quux()) {
    foo();
} else {
    bar();
}
runtime ≤ runtime of quux + max(runtime of foo, runtime of bar)
(max because we measure the worst case)

Θ(1): constant time

constant time (Θ(1) time): runtime does not depend on the input
  accessing an array element
  linked list insert/delete (at a known end)
  getting a vector's size

is that really constant time?

is getting a vector's size really constant time?
  the vector stores its size, but for, e.g., N = 2^10000, the size itself is huge
our usual assumption: treat sensible integer arithmetic as constant time (anything we'd keep in a long or smaller variable in practice)
can do other analysis, but it's uncommon: e.g. bit complexity, the number of single-bit operations
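a Java sketch of why the assumption matters: once numbers outgrow a machine word, arithmetic is no longer constant time, and BigInteger operation cost visibly grows with bit length (illustrative; exact times vary by machine):

    // Sketch: multiplication cost grows with operand size for big integers.
    import java.math.BigInteger;

    public class BitCost {
        public static void main(String[] args) {
            for (int bits = 1 << 14; bits <= 1 << 18; bits <<= 2) {
                BigInteger a = BigInteger.ONE.shiftLeft(bits)
                                             .subtract(BigInteger.ONE);  // 2^bits - 1
                long t0 = System.nanoTime();
                a.multiply(a);                    // NOT a constant-time operation
                long t1 = System.nanoTime();
                System.out.printf("%d-bit multiply: %.3f ms%n", bits, (t1 - t0) / 1e6);
            }
        }
    }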

Θ(log n): logarithmic time

binary search of a sorted array: search space cut in half each iteration ⇒ log_2(N) iterations
balanced tree search/insert: height of tree (somehow) guaranteed to be Θ(log N)
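a standard iterative binary search in Java (a sketch, not the course's code) with an iteration counter showing the log_2(N) behavior:

    // Sketch: binary search halves the search space each iteration.
    public class BinarySearchDemo {
        static int iterations = 0;

        static int search(int[] sorted, int target) {
            int lo = 0, hi = sorted.length - 1;
            while (lo <= hi) {
                iterations++;
                int mid = lo + (hi - lo) / 2;       // avoids int overflow
                if (sorted[mid] == target) return mid;
                if (sorted[mid] < target) lo = mid + 1;
                else hi = mid - 1;
            }
            return -1;  // not found
        }

        public static void main(String[] args) {
            int n = 1 << 20;                        // about a million elements
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i;   // already sorted
            search(a, n - 1);                       // target at the far end
            System.out.println(iterations);         // about log2(n) ≈ 20
        }
    }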

Θ(n): linear

constant # of operations per element:
  printing a list
  search in an unsorted array
  search in a linked list
  doubling the size of a vector
  unbalanced binary search tree find/insert

Θ(n log n): log-linear

fast comparison-based sorting: merge sort, heap sort, quicksort (if pivot choices are good)
inserting n elements into a balanced tree
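the balanced-tree case, sketched with java.util.TreeSet (a red-black tree, so height stays Θ(log n)): n inserts at Θ(log n) each give Θ(n log n) total:

    // Sketch: n balanced-tree inserts, each Θ(log n), total Θ(n log n).
    import java.util.TreeSet;

    public class TreeInsert {
        public static void main(String[] args) {
            int n = 1_000_000;
            TreeSet<Integer> tree = new TreeSet<>();
            for (int i = 0; i < n; i++)
                tree.add(i);            // each insert walks a path of height Θ(log n)
            System.out.println(tree.size());  // 1000000
        }
    }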

Θ(n^2): quadratic

slow comparison-based sorting: insertion sort, bubble sort, selection sort, quicksort (if pivot choices are bad)
most doubly nested for loops that go up to n
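insertion sort as a concrete doubly nested loop (the standard textbook version); on reverse-sorted input it does about n^2/2 comparisons and shifts:

    // Sketch: insertion sort, worst case Θ(n^2).
    public class InsertionSort {
        static void sort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i], j = i - 1;
                while (j >= 0 && a[j] > key) {  // shift larger elements right
                    a[j + 1] = a[j];
                    j--;
                }
                a[j + 1] = key;                 // drop key into its slot
            }
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 4, 6, 1, 3};
            sort(a);
            System.out.println(java.util.Arrays.toString(a));  // [1, 2, 3, 4, 5, 6]
        }
    }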

Θ(2^(n^c)), c ≥ 1: exponential

n-bit solution: try every one of the 2^n possibilities
  cracking a combination lock by trying every possibility
  finding the best move in an N x N Go game (with Japanese rules)
  checking satisfiability of a Boolean expression*
  the Traveling Salesman problem*
  (*for the known algorithms; maybe we can do better?)
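a brute-force Java sketch of "try every one of the 2^n possibilities", using subset sum as a stand-in problem (an assumption: the slides don't name a specific one); each bitmask is one setting of n on/off choices:

    // Sketch: enumerate all 2^n subsets of n values; exponential time.
    public class BruteForce {
        public static void main(String[] args) {
            int[] values = {3, 34, 4, 12, 5, 2};
            int target = 9, n = values.length;
            boolean found = false;
            for (int mask = 0; mask < (1 << n); mask++) {  // 2^n candidate subsets
                int sum = 0;
                for (int i = 0; i < n; i++)
                    if ((mask & (1 << i)) != 0)            // is bit i on?
                        sum += values[i];
                if (sum == target) { found = true; break; }
            }
            System.out.println(found);  // true: 4 + 5 = 9
        }
    }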

more?

Θ(n^3): finding shortest paths between all pairs of n nodes in a fully-connected graph
approx. order 2^(n^(1/3)): best known integer factorization algorithm