1. Basic operations in an algorithm

An algorithm that solves a particular task employs some set of basic operations. When we estimate the amount of work done by an algorithm, we usually do not count every step, such as initializing certain variables. Generally, the total number of steps is roughly proportional to the number of basic operations. Thus we are concerned mainly with the basic operations: how many times they have to be performed, depending on the size of the input. Typical basic operations for some problems are the following:

    Problem                                    Basic operation
    Find x in an array                         Comparison of x with an entry in the array
    Multiply two matrices with real entries    Multiplication of two real numbers
    Sort an array of numbers                   Comparison of two array entries, plus moving elements in the array
    Traverse a tree                            Traversing an edge

The work done by an algorithm, i.e. its complexity, is determined by the number of basic operations necessary to solve the problem.

Note: The complexity of a program that implements an algorithm is roughly, but not exactly, the same as the complexity of the algorithm. Here we are talking about algorithms independent of any particular implementation, programming language, or computer.

2. Size of input

Some algorithms do not depend on the size of the input: the number of operations they perform is fixed. Other algorithms do depend on the size of the input, and these are the algorithms that might cause problems. Before implementing such an algorithm, we have to be sure that it will finish the job in reasonable time.

What is the size of the input? We need to choose some reasonable measure of size. Here are some examples:

    Problem                              Size of input
    Find x in an array                   The number of elements in the array
    Multiply two matrices                The dimensions of the matrices
    Sort an array                        The number of elements in the array
    Traverse a binary tree               The number of nodes
    Solve a system of linear equations   The number of equations, or the number of unknowns, or both

Counting the number of operations

The core of algorithm analysis is to find out how the number of basic operations depends on the size of the input.

A. In our calculations we will use the following rules to manipulate Big-Oh expressions. Let T1(N) and T2(N) be the numbers of operations necessary to run two program segments, respectively, and let T1(N) = O(g1(N)) and T2(N) = O(g2(N)). Then:

a. T1(N) + T2(N) = max(O(g1(N)), O(g2(N))), where the maximum is determined in the following way:
   If g1(N) = O(g2(N)), then max(O(g1(N)), O(g2(N))) = O(g2(N)).
   If g2(N) = O(g1(N)), then max(O(g1(N)), O(g2(N))) = O(g1(N)).
   Example: max(O(n²), O(n³)) = O(n³).

b. T1(N) * T2(N) = O(g1(N) * g2(N))

B. There are four rules for counting the operations:

Rule 1: for loops

The running time of a for loop is at most the running time of the statements inside the loop times the number of iterations.

    for (i = 0; i < n; i++)
        sum = sum + i;

a. Find the running time of the statements when executed only once. The statements in the loop heading have a fixed number of operations, hence a constant running time O(1) when executed only once. The statement in the loop body also has a fixed number of operations, hence a constant running time when executed only once.

b. Find how many times each statement is executed:

    for (i = 0; i < n; i++)    // i = 0;  executed only once:   O(1)
                               // i < n;  executed n + 1 times: O(n)
                               // i++     executed n times:     O(n)
                               // total time of the loop heading:
                               //   O(1) + O(n) + O(n) = O(n)
        sum = sum + i;         // executed n times: O(n)

The loop heading plus the loop body gives O(n) + O(n) = O(n). The loop's running time is O(n).

Hence we can say: if the size of the loop is n (the loop variable runs from 0, or some fixed constant, to n) and the body has constant running time (no nested loops), then the time is O(n).
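
To see Rule 1 concretely, here is a minimal, self-contained C sketch (my own illustration, not part of the original notes) that instruments the loop above with a counter and prints how often the body runs for a few input sizes; the count grows linearly with n:

    #include <stdio.h>

    int main(void) {
        int sizes[] = {10, 100, 1000};
        for (int s = 0; s < 3; s++) {
            int n = sizes[s];
            long sum = 0;
            long body_count = 0;          /* instrumentation, not part of the algorithm */
            for (int i = 0; i < n; i++) {
                sum = sum + i;            /* the O(1) loop body */
                body_count++;
            }
            printf("n = %4d -> body executed %ld times (sum = %ld)\n",
                   n, body_count, sum);
        }
        return 0;
    }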

Rule 2: Nested loops

The total running time is the running time of the inside statements times the product of the sizes of all the loops.

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            sum++;

Applying Rule 1 to the nested loop (the 'j' loop), we get O(n) for the body of the outer loop. The outer loop runs n times, therefore the total time for the nested loops is O(n) * O(n) = O(n*n) = O(n²).

What happens if the inner loop does not start from 0?

    for (i = 0; i < n; i++)
        for (j = i; j < n; j++)
            sum++;

Here the number of times the inner loop is executed depends on the value of i:

    i = 0:      inner loop runs n times
    i = 1:      inner loop runs (n-1) times
    i = 2:      inner loop runs (n-2) times
    ...
    i = n - 2:  inner loop runs 2 times
    i = n - 1:  inner loop runs 1 time (once)

Adding the right column, we get 1 + 2 + ... + n = n(n+1)/2 = O(n²).

General rule for loops: the running time is the product of the sizes of the loops times the running time of the body. Example:

    for (i = 0; i < n; i++)
        for (j = 0; j < 2*n; j++)
            sum++;

We have one operation inside the loops, and the product of the sizes is 2n². Hence the running time is O(2n²) = O(n²).
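
The triangular sum above can be checked empirically. The following sketch (an illustration built on the loops just shown) counts the iterations of the dependent inner loop and compares the count with the closed form n(n+1)/2:

    #include <stdio.h>

    int main(void) {
        int n = 100;
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                count++;                  /* one basic operation per iteration */
        printf("counted = %ld, n*(n+1)/2 = %ld\n",
               count, (long)n * (n + 1) / 2);
        return 0;
    }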

Rule 3: Consecutive program fragments

The total running time is the maximum of the running times of the individual fragments.

    for (i = 0; i < n; i++)
        sum = sum + i;

    for (i = 0; i < n; i++)
        for (j = 0; j < 2*n; j++)
            sum++;

The first loop runs in O(n) time, the second in O(n²) time; the maximum is O(n²).

Rule 4: if statements

    if (condition)
        S1;
    else
        S2;

The running time is the maximum of the running times of S1 and S2.

3. Examples

1. Search in an unordered array of elements.

    for (i = 0; i < n; i++)
        if (a[i] == x)
            return 1;    // 1 means success
    return -1;           // -1 means failure: the element is not found

The basic operation in this problem is comparison, so we are interested in how the number of comparisons depends on n. Here we have a loop that runs at most n times.

If the element is not there, the algorithm needs n comparisons. If the element is at the end, we need n comparisons. If the element is somewhere in between, we need fewer than n comparisons. In the worst case (the element is not there, or is located at the end) we have n comparisons to make. Hence the number of operations is O(n).

To find the average case, we have to consider the possible cases: if the element is at the first position, 1 comparison; at the second position, 2 comparisons; and so on. We compute the sum of the comparisons over the possible cases and divide by the number of cases, assuming that all cases are equally likely:

    1 + 2 + ... + n = n(n+1)/2

Thus in the average case the number of comparisons is (n+1)/2 = O(n). Generally, the algorithm for finding an element in an unsorted array needs O(n) operations. (A runnable sketch of this search appears after Example 2.)

2. Search in an n x m table.

    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            if (a[i][j] == x)
                return 1;    // 1 means success
    return -1;               // -1 means failure: the element is not found

Here the inner loop runs at most m times and is located in another loop that runs at most n times, so in total there are at most nm operations. Strictly speaking, nm is less than n² when m < n. However, for very large values of n and m the difference is negligible and the amount of work done is roughly the same. Thus we can say that the complexity here is O(n²).
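
As promised in Example 1, here is a runnable sketch that packages the search fragment above into a complete function and reports the number of comparisons, confirming the worst case of n (the names find and comparisons are mine, chosen for illustration):

    #include <stdio.h>

    /* Returns 1 on success, -1 on failure, as in the notes,
       and reports the number of comparisons made. */
    int find(const int a[], int n, int x, int *comparisons) {
        *comparisons = 0;
        for (int i = 0; i < n; i++) {
            (*comparisons)++;             /* the basic operation */
            if (a[i] == x)
                return 1;                 /* success */
        }
        return -1;                        /* failure: x is not in the array */
    }

    int main(void) {
        int a[] = {7, 3, 9, 1, 5};
        int cmp;
        find(a, 5, 5, &cmp);              /* element at the last position: worst case */
        printf("worst case: %d comparisons for n = 5\n", cmp);
        return 0;
    }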

3. Finding the greatest element in an array.

    amax = a[0];
    for (i = 1; i < n; i++)
        if (a[i] > amax)
            amax = a[i];

Here the number of operations is always n - 1. The amount of work depends on the size of the input, but does not depend on the particular values. The complexity is O(n); we disregard the "-1" because the difference is negligible for large n.
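
A complete version of Example 3, instrumented to show that the comparison count is always exactly n - 1 regardless of the values (a sketch; the function name findMax is mine):

    #include <stdio.h>

    int findMax(const int a[], int n, int *comparisons) {
        int amax = a[0];
        *comparisons = 0;
        for (int i = 1; i < n; i++) {
            (*comparisons)++;             /* the basic operation */
            if (a[i] > amax)
                amax = a[i];
        }
        return amax;
    }

    int main(void) {
        int a[] = {3, 8, 1, 9, 4, 7};
        int cmp;
        int m = findMax(a, 6, &cmp);
        printf("max = %d, comparisons = %d (n - 1 = 5)\n", m, cmp);
        return 0;
    }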

Learning goals

1. Know how the size of the input is measured for the basic problems discussed above.
2. Given a program fragment (loop, nested loops, if statements), be able to estimate the number of operations using Big-Oh notation.

Exam-like questions and problems with solutions

Give an analysis of the running time of the following loops, using Big-Oh notation.

Problem 1

    for (i = 0; i < n; i++)
        sum++;

The running time of the operation sum++ is a constant. The loop runs n times, hence the complexity of the loop is O(n).

Problem 2

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            sum++;

The running time of the operation sum++ is a constant. The outer loop runs n times and the nested loop also runs n times, hence the complexity is O(n²).

Problem 3

    for (i = 0; i < n; i++)
        for (j = 0; j < n*n; j++)
            sum++;

The running time of the operation sum++ is a constant. The outer loop runs n times and the nested loop runs n*n times, hence the complexity is O(n³).

Problem 4

    for (i = 0; i < n; i++)
        for (j = 0; j < i; j++)
            sum++;

The running time of the operation sum++ is a constant. The outer loop runs n times. On the first execution of the outer loop (i = 0) the inner loop does not run at all; on the second it runs once, on the third twice, and so on. Thus the inner loop body is executed 0 + 1 + ... + (n-1) = n(n-1)/2 times in total, i.e. (n-1)/2 times per outer iteration on average. The total running time is O(n(n-1)/2) = O(n*n) = O(n²).

In general, when the upper bound of a loop variable depends on some other variable, we consider the largest value of the bound.
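
The count in Problem 4 can be verified directly. This sketch (built on the loops just shown) tallies the sum++ executions and compares the total with the closed form n(n-1)/2:

    #include <stdio.h>

    int main(void) {
        int n = 100;
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < i; j++)
                count++;                  /* stands in for sum++ */
        printf("counted = %ld, n*(n-1)/2 = %ld\n",
               count, (long)n * (n - 1) / 2);
        return 0;
    }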

Problem 5

    for (i = 0; i < n; i++)
        for (j = 0; j < i*i; j++)
            for (k = 0; k < j; k++)
                sum++;

The running time of the operation sum++ is a constant. The innermost loop runs at most n*n times, the middle loop also runs at most n*n times, and the outer loop runs n times, so the overall complexity is O(n⁵).

Problem 6

    for (i = 0; i < n; i++)
        for (j = 0; j < i*i; j++)
            if (j % i == 0)
                for (k = 0; k < j; k++)
                    sum++;

Compare this problem with Problem 5. Obviously the innermost loop runs fewer times than in Problem 5, so a refined analysis is possible. However, we are usually content to neglect the if statement, treat the innermost loop as O(n²), and obtain the same overall running time of O(n⁵).
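
To see how much work the if statement in Problem 6 really eliminates (the O(n⁵) bound simply ignores it), this sketch counts the innermost executions for both problems side by side:

    #include <stdio.h>

    int main(void) {
        int n = 30;
        long count5 = 0, count6 = 0;
        for (int i = 0; i < n; i++)       /* Problem 5 */
            for (int j = 0; j < i * i; j++)
                for (int k = 0; k < j; k++)
                    count5++;
        for (int i = 0; i < n; i++)       /* Problem 6: when i = 0 the j loop is
                                             empty, so j % i is never evaluated */
            for (int j = 0; j < i * i; j++)
                if (j % i == 0)
                    for (int k = 0; k < j; k++)
                        count6++;
        printf("Problem 5: %ld executions, Problem 6: %ld executions\n",
               count5, count6);
        return 0;
    }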

Problem 7

    val = 1;
    for (i = 0; i < n; i++)
        val = val + i;
    for (j = 0; j < n*n; j++)
        val = val * j;

First, we assume that the running time to compute an arithmetic expression without function calls is negligible. Then we have two consecutive loops with running times O(n) and O(n²). We take the maximum complexity, hence the overall running time is O(n²).

Problem 8

    for (i = 0; i < n; i++)
        sum++;
    for (j = 0; j < n*n; j++)
        compute_val(sum, j);

The complexity of the function compute_val(x, y) is given to be O(n log n). The difference between this problem and the one above is the presence of a function call in the loop body whose complexity is not a constant: it depends on n and is given to be O(n log n). The second loop runs n*n times, so its complexity is O(n² * n log n) = O(n³ log n). The first loop has a smaller running time, O(n); we take the maximum and conclude that the overall running time is O(n³ log n).
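
Since compute_val in Problem 8 is specified only by its O(n log n) complexity, a concrete demonstration has to model it. The sketch below is entirely illustrative (the real function is not given): it charges n*log2(n) units of work per call and shows that the second loop's total dwarfs the first loop's, matching O(n³ log n):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        int n = 64;
        double work_first = 0.0, work_second = 0.0;
        for (int i = 0; i < n; i++)
            work_first += 1.0;                   /* O(1) body: n units in total */
        for (int j = 0; j < n * n; j++)
            work_second += n * log2((double)n);  /* one simulated O(n log n) call */
        printf("first loop: %.0f units, second loop: %.0f units (n^3*log n = %.0f)\n",
               work_first, work_second, (double)n * n * n * log2((double)n));
        return 0;
    }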