Amortized Analysis


Amortized Analysis

- The problem domains vary widely, so this approach is not tied to any single data structure.
- The goal is to guarantee the average performance of each operation in the worst case.
- Three methods: the aggregate method, the accounting method, and the potential method.
- Two problems: stack operations (including a multipop) and a binary counter with an increment operation.

Stack Operations

- Goal: find the worst-case time T(n) for a sequence of n operations; the amortized cost per operation is T(n)/n.
- Push(x, S) has complexity O(1).
- Pop(S) returns the popped object; its complexity is O(1).
- Multipop(S, k) has complexity min(s, k), where s is the stack size.

A Simple Analysis

- Start with an empty stack. The stack can grow no larger than O(n), so a single multipop operation could have complexity O(n).
- Since there are n operations, an upper bound on the total complexity is O(n^2).
- Although this is a valid upper bound, it grossly overestimates the actual cost.

An Amortized Analysis

- Claim: any sequence of n push, pop, and multipop operations has worst-case complexity O(n).
- Each object can be popped at most once for each time it is pushed, and the number of push operations is at most n, so the number of pops, whether from pop or multipop, is also at most n; the overall complexity is O(n).
- The amortized cost is therefore O(n)/n = O(1).
- Question: would this still be true if we had both a multipush and a multipop?

Incrementing Binary Numbers

- Assume bits A[k-1] down to A[0], representing the value x = Σ_{i=0}^{k-1} A[i]·2^i.
- Increment repeatedly starting from 0 and count the number of bit flips: for the first 16 numbers the total cost is 31. In general, for n increments, what will the complexity be?

A Simple Analysis

- We measure cost as the number of bit flips: some increments flip only one bit, while others ripple through the number and flip many bits. What is the average cost per operation?
- A cursory analysis: the worst case for a single increment is O(k), so a sequence of n increments has worst-case behavior O(nk).
- Although this is an upper bound, it is not very tight.
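The multipop stack claim above can be checked with a minimal Python sketch (the class name and the cost-counting field are my own, not from the slides); the counter tallies every elementary stack move, which is the cost measure used in the analysis:

```python
class MultipopStack:
    """Stack supporting push, pop, and multipop, with a running
    count of elementary stack moves (the cost measure in the text)."""

    def __init__(self):
        self.items = []
        self.cost = 0  # total elementary pushes/pops performed

    def push(self, x):
        self.items.append(x)
        self.cost += 1  # O(1) actual cost

    def pop(self):
        self.cost += 1  # O(1) actual cost
        return self.items.pop()

    def multipop(self, k):
        # Pops min(k, s) items, where s is the current stack size.
        popped = []
        while self.items and k > 0:
            popped.append(self.items.pop())
            self.cost += 1
            k -= 1
        return popped

# Any sequence of operations costs O(n) in total: each item is
# popped at most once per push, so pops never outnumber pushes.
s = MultipopStack()
for i in range(100):
    s.push(i)            # 100 pushes
s.multipop(40)           # costs min(40, 100) = 40
s.multipop(1000)         # costs min(1000, 60) = 60
assert s.cost == 200     # total is at most twice the number of pushes
```

The assertion reflects the aggregate argument: 100 pushes plus 40 + 60 pops, and the pop total can never exceed the push total.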

An Amortized Analysis

- Not all bits are flipped on each increment: A[0] flips every increment, A[1] every other increment, A[2] every fourth, and so on; for i > lg n, bit A[i] never flips.
- Summing all the bit flips gives Σ_{i=0}^{⌊lg n⌋} ⌊n/2^i⌋ < n·Σ_{i=0}^{∞} 1/2^i = 2n.
- So the worst case is bounded by O(n), and the amortized cost O(n)/n is only O(1)!

The Accounting Method

- Different charges are assigned to different operations: overcharges result in a credit, and credits can be used later to pay for operations whose amortized cost is less than their actual cost.
- This is very different from the aggregate method, in which all operations have the same amortized cost.
- Choice of amortized amounts: the total amortized cost for any sequence must be an upper bound on the actual cost, so the total credit must be nonnegative; the amortized amounts selected must never result in a negative credit.

Stack Operations

Operation   Actual cost   Amortized cost
Push        1             2
Pop         1             0
Multipop    min(s, k)     0

- Rationale: since we start with an empty stack, pushes must come first, and each push builds up one unit of amortized credit.
- All pops are charged against this credit, and there can never be more pops (of either type) than pushes.
- Therefore the total amortized cost is O(n).

Incrementing a Binary Counter

- Amortized cost: 2 for setting a bit to 1, 0 for resetting a bit to 0; the credit at any point is the number of 1 bits in the counter.
- Analysis of the increment operation: the while loop that resets bits is charged against stored credits, and only one bit is set per increment, so the total charge is 2.
- Since the number of 1s is never negative, the amount of credit is never negative, and the total amortized cost for n increments is O(n).

The Potential Method

- Potential is the accumulation of credit, associated with the entire data structure rather than with individual objects: φ(D_i) is a real number associated with the structure D_i.
- The amortized cost is ĉ_i = c_i + φ(D_i) - φ(D_{i-1}), so the total amortized cost is Σ_{i=1}^{n} ĉ_i = Σ_{i=1}^{n} c_i + φ(D_n) - φ(D_0).
- If we ensure that φ(D_i) >= φ(D_0) for all i, the total amortized cost is an upper bound on the actual cost. It is often convenient to let φ(D_0) = 0; then it only needs to be shown that every φ(D_i) is nonnegative.

Stack Operations - 1

- The potential function is the size of the stack; it is 0 for the initial empty stack and never negative, so the total amortized cost of n operations with respect to φ is an upper bound on the actual cost.
- The push operation: the potential increases by 1, so the amortized cost is 1 + 1 = 2.
- The pop operation: the difference in potential is -1, so the amortized cost is 1 + (-1) = 0.
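The binary counter analysis can be sketched directly (a minimal illustration, with the counter stored as a Python list of bits, least significant first):

```python
def increment(A):
    """Increment a binary counter (A[0] least significant);
    returns the number of bit flips performed."""
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0          # resets are paid for by stored credit
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1          # the single set bit, charged 2 (1 + 1 credit)
        flips += 1
    return flips

A = [0] * 8               # k = 8 bits
total = sum(increment(A) for _ in range(16))
assert total == 31        # matches the count quoted in the text
assert total < 2 * 16     # aggregate bound: fewer than 2n flips
```

The two assertions confirm both the concrete cost of 31 for the first 16 increments and the general 2n bound from the summation above.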

Stack Operations - 2

- The multipop operation: let k' = min(k, s). The actual cost is k' and the potential drops by k', so the amortized cost is k' + (-k') = 0.

Incrementing a Binary Counter - 1

- Potential function: the number of 1s in the counter after the i-th operation, b_i.
- Amortized cost of the i-th increment: suppose t_i bits are reset and one bit is set, so the actual cost is t_i + 1 and b_i <= b_{i-1} - t_i + 1.
- The potential difference is φ(D_i) - φ(D_{i-1}) <= (b_{i-1} - t_i + 1) - b_{i-1} = 1 - t_i, so ĉ_i <= (t_i + 1) + (1 - t_i) = 2.
- Each operation has an amortized cost of O(1), so for n operations the total amortized cost is O(n); since the potential function meets all of our requirements, this is a valid upper bound, and the total cost is O(n).

Incrementing a Binary Counter - 2

- Suppose the counter is not initially zero: there are b_0 initial 1s, and after n increments there are b_n 1s.
- Since the amortized cost of each increment is at most 2, the total actual cost is Σ c_i = Σ ĉ_i - φ(D_n) + φ(D_0) <= 2n - b_n + b_0.
- Since b_0 < k, if we execute at least n = Ω(k) increment operations, the total cost is O(n) no matter what the starting value of the counter is.

Dynamic Tables

- Dynamic tables expand and contract according to the amount of data being held; the internal organization of the table is unimportant, so for simplicity we show an array structure.
- The load factor α(T) of a nonempty table is the number of items in the table divided by the table size; an empty table has size 0 and, by convention, load factor 1.
- When we try to insert an item into a full table, we double the size of the table; when the load factor is too low, we contract the table. Java vectors act this way (at least for expansion).

The Insert Operation

- Initially num(T) = size(T) = 0. The insertion procedure allocates a table of size 1 when the table is empty, expands the table when it is full, and in every case stores the item and increments num(T).
- A simple analysis: expansion is the expensive step. For the i-th insertion with expansion we could have c_i = i, so for n insertions the worst case is O(n) for each and O(n^2) total; but this bound can be improved.

Using the Aggregate Method

- Table expansion occurs only when i - 1 is an exact power of 2, so c_i = i if i - 1 is an exact power of 2 and 1 otherwise.
- Summing, Σ c_i <= n + Σ_{j=0}^{⌊lg n⌋} 2^j < n + 2n = 3n, so the amortized cost of a single operation is 3.
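The doubling strategy can be sketched as follows (a minimal illustration; the class name and cost-counting convention are my own, with cost counting element writes, i.e. inserts plus copies during expansion):

```python
class DynamicTable:
    """Array-backed table that doubles on overflow; cost counts
    element writes (inserts plus copies during expansion)."""

    def __init__(self):
        self.size = 0
        self.num = 0
        self.cost = 0

    def insert(self, x):
        if self.num == self.size:          # table full: expand
            self.size = max(1, 2 * self.size)
            self.cost += self.num          # copy every existing item
        self.num += 1                      # store the new item
        self.cost += 1

t = DynamicTable()
n = 1000
for i in range(n):
    t.insert(i)
assert t.cost < 3 * n    # aggregate bound: total cost under 3n
```

For n = 1000 the copies total 1 + 2 + 4 + ... + 512 = 1023, so the overall cost of 2023 writes stays below the 3n bound derived above.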

Using the Accounting Method

- We set the amortized cost for insert at 3. Intuitively: one credit pays for inserting the item into the current table, one for moving the item itself when the table is next expanded, and one for moving another item that has already been moved once.
- Immediately after an expansion to size m there is no available credit. Filling the empty slots costs 1 each in actual cost; a second credit is banked with each new item for its own future copy; the third credit pays to copy an existing item.
- Therefore the total amortized cost is 3n.

Using the Potential Method

- The potential function is φ(T) = 2·num[T] - size[T].
- Immediately after an expansion, num[T] = size[T]/2, so the potential is 0 (all credits were used for the expansion).
- Immediately before an expansion, num[T] = size[T], so the potential is num[T], which pays for the copying.
- Since num[T] >= size[T]/2, the potential is nonnegative.

Allowing for Table Contractions

- Contracting when the table is half full is not a good strategy: suppose that after n/2 insertions we perform the sequence I, D, D, I, I, D, D, I, I, ... These operations would alternately expand and contract the table, leading to a complexity of O(n^2); we do not have enough credits to pay for expansions and contractions that alternate so quickly.
- Instead, we still expand when the table is full, but we contract only when the load factor falls to 1/4: we earn credits as the load factor drops from 1/2 to 1/4, which makes the contraction affordable.
- After a contraction is completed, the load factor is 1/2, so another contraction cannot immediately follow.

Using the Potential Method

- The potential function is φ(T) = 2·num[T] - size[T] if α(T) >= 1/2, and size[T]/2 - num[T] if α(T) < 1/2.
- The potential is 0 immediately after an expansion or a contraction; it builds while the load factor increases toward 1 or decreases toward 1/4.
- The potential is never negative, so the total amortized cost is an upper bound on the actual cost.
- When the load factor is 1, the potential is num[T], so the potential can pay for an expansion; when the load factor is 1/4, the potential is also num[T], so it can pay for a contraction.
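The expand-at-full, contract-at-quarter scheme, together with the piecewise potential function, can be exercised with a short sketch (names are my own; the contraction test fires when num falls to size/4 after a delete, a close variant of the scheme in the text):

```python
import random

class ShrinkingTable:
    """Dynamic table that doubles when full and halves when the
    load factor drops to 1/4, as in the contraction scheme above."""

    def __init__(self):
        self.size = 0
        self.num = 0

    def potential(self):
        # phi(T) = 2*num - size if alpha >= 1/2, else size/2 - num
        if self.size == 0 or 2 * self.num >= self.size:
            return 2 * self.num - self.size
        return self.size // 2 - self.num

    def insert(self):
        if self.num == self.size:
            self.size = max(1, 2 * self.size)   # expand when full
        self.num += 1

    def delete(self):
        self.num -= 1
        if self.size > 1 and 4 * self.num <= self.size:
            self.size //= 2                     # contract at 1/4 full

t = ShrinkingTable()
random.seed(1)
for _ in range(10000):
    if t.num > 0 and random.random() < 0.5:
        t.delete()
    else:
        t.insert()
    assert t.potential() >= 0   # the potential never goes negative
```

The loop checks the key invariant empirically: under a random mix of inserts and deletes, the piecewise potential stays nonnegative, so the amortized costs are a valid upper bound.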

Amortized Cost for Insertion

- If α_{i-1} >= 1/2, the analysis is the same as before, with an amortized cost of at most 3.
- If α_i < 1/2 and α_{i-1} < 1/2, there is no expansion; num_i = num_{i-1} + 1 and size_i = size_{i-1}, so ĉ_i = 1 + (size_i/2 - num_i) - (size_{i-1}/2 - num_{i-1}) = 1 - 1 = 0.
- Thus the amortized cost of insert is at most 3.

Amortized Cost of Deletion

- If α_{i-1} < 1/2 and there is no contraction, num_i = num_{i-1} - 1 and size_i = size_{i-1}, so ĉ_i = 1 + (size_i/2 - num_i) - (size_{i-1}/2 - num_{i-1}) = 1 + 1 = 2.
- If α_{i-1} < 1/2 and the i-th operation causes a contraction, then size_i/2 = size_{i-1}/4 = num_{i-1} = num_i + 1 and the actual cost is c_i = num_i + 1, so ĉ_i = (num_i + 1) + (size_i/2 - num_i) - (size_{i-1}/2 - num_{i-1}) = 1.
- If α_{i-1} >= 1/2, the amortized cost of deletion is also bounded by a constant (this is exercise 18.4-3).
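The contraction case above can be checked with one concrete instance (my own numbers, chosen so the table is exactly 1/4 full before the delete):

```python
# Concrete check of the deletion-with-contraction case:
# before the i-th delete, size = 8 and num = 2 (load factor 1/4).
size_prev, num_prev = 8, 2
phi_prev = size_prev // 2 - num_prev           # alpha < 1/2 branch: 4 - 2 = 2

# The delete triggers a contraction: the size halves, and the
# remaining num_i = num_prev - 1 items are copied, so the actual
# cost is num_i + 1 (the copies plus the delete itself).
size_i, num_i = size_prev // 2, num_prev - 1
cost_i = num_i + 1                             # 1 copy + the delete
phi_i = size_i // 2 - num_i                    # 2 - 1 = 1

amortized = cost_i + phi_i - phi_prev          # 2 + 1 - 2 = 1
assert amortized == 1
```

The arithmetic reproduces the bound derived above: a deletion that causes a contraction has amortized cost exactly 1.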