MPATE-GE 2618: C Programming for Music Technology. Unit 4.2


Quiz 1 results (out of 25)
Mean: 19.9 (standard deviation = 3.9), equivalent to 79.1% (SD = 15.6)
Median: 21.5
High score: 24
Low score: 13

Pointer syntax summary
To declare a pointer: int *ptr_to_int;
To take the address of a variable: ptr = &var_name;
To get the value stored at the address contained in a pointer: value = *ptr;
To change the value of the variable referenced by a pointer: *ptr = value;
Warning: Declaring a pointer does not allocate the space to which it points. For example, int *foop; declares a pointer to an integer but does not allocate space for the integer. If you assign something to *foop, you will either crash your program or get garbage.
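A minimal sketch pulling these four pieces of syntax together (variable names are illustrative, not from the course code):

#include <stdio.h>

int main(void)
{
    int var = 7;
    int *ptr = &var;              /* declare a pointer and take var's address */

    printf("*ptr = %d\n", *ptr);  /* read the value at that address: 7 */
    *ptr = 42;                    /* change var through the pointer */
    printf("var  = %d\n", var);   /* var is now 42 */
    return 0;
}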

Pointers and changing values in functions
Recall that you can't change the values of arguments passed to functions, except in the case of arrays. Say we wanted to write a function divide that performed integer division and returned both the result and the remainder. Functions can only return a single value, but we have two values to return. To return two values, we might have used globals, but using globals to return values from functions is normally considered bad style. Another problem: if we return a value, there's no mechanism for reporting an error (divide by 0). A better solution uses pointers. See divide.c.
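divide.c is not reproduced here; the following is a minimal sketch of the idea, assuming the return value signals success or failure and the two results come back through pointer parameters:

#include <stdio.h>

/* Returns 0 on success, -1 on divide-by-zero. The quotient and
 * remainder are written through the pointer parameters. */
int divide(int num, int denom, int *quotient, int *remainder)
{
    if (denom == 0)
        return -1;              /* report the error instead of crashing */
    *quotient = num / denom;
    *remainder = num % denom;
    return 0;
}

int main(void)
{
    int q, r;

    if (divide(17, 5, &q, &r) == 0)
        printf("17 / 5 = %d remainder %d\n", q, r);
    return 0;
}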

Functions and reference parameters: summary
If you are calculating just one value, a function that returns a value is generally easier to use than one that uses a reference parameter (i.e., a pointer to a value). If you are calculating more than one quantity, it is generally best to use a reference parameter for each and not to return one of them as the function value.

structs and pointers
When using a pointer to reference a member of a struct, use the -> operator instead of . (dot).

typedef struct Point {
    int x;
    int y;
} Point;

Point p;
Point *pptr = &p;

pptr->x = 3;    /* equivalent to p.x = 3 */
pptr->y = 4;    /* equivalent to p.y = 4 */

Efficiency
So far we haven't given much thought to how efficiently our programs run. Sometimes it's OK to worry only about getting a program correct and not about making it efficient (fast). Usually, though, you want to at least think about what inefficiencies there might be in a program. In your assignments, you should begin thinking about how efficient your algorithms are.

Example: searching
Say we're trying to find the location of a value in an array of 10 unsorted numbers. On average, approximately how many comparisons will you have to make before you find it? What is the greatest number of comparisons you could possibly make? Now let's generalize: say we're trying to find a value in an array of n numbers. On average, approximately how many comparisons will you have to make to find the location? What is the greatest number you could possibly make? When we express the efficiency of an algorithm, we can think about both the average running time and the worst-case running time. We normally express the problem size in terms of the number of elements (n) in the problem (in this case, the number of values in the array), and then express the efficiency as a function of n. In this problem, what is the expected search time?
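The search described here is a linear scan; a minimal sketch (names are illustrative). On average we expect to examine about n/2 elements, and in the worst case all n:

/* Returns the index of k in array[0..n-1], or -1 if not found. */
int linear_search(const int array[], int n, int k)
{
    int i;

    for (i = 0; i < n; i++)
        if (array[i] == k)
            return i;       /* found after i + 1 comparisons */
    return -1;              /* worst case: all n elements examined */
}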

Algorithmic efficiency
It is often easier to compute worst-case performance than expected performance. In sorting, we often use the number of comparisons as the measure of efficiency (sometimes called complexity). We use a system called big-O notation to express an algorithm's efficiency.

Definition of big-O
Assume that an algorithm solves a problem of size n in c * n^2 operations for some constant c; then the time complexity of that algorithm is said to be O(n^2) (pronounced "order n-squared"). More formally, a function g(n) is said to be O(f(n)) if there exists a constant c such that g(n) <= c * f(n) for all but some finite set of non-negative values of n. Intuitively, an algorithm that is O(f(n)) runs in time proportional to f(n).

Big-O and efficiency
Going back to our algorithm to search for a number: both the expected time (n/2) and the worst time (n) are O(n), since n/2 can be expressed as (1/2) * n. In the formula above, f(n) = n and c = 1/2. Which algorithm is more efficient: one that runs in 6n steps or one that runs in 2n^2 steps? What is the big-O notation for each algorithm in the last question? Your friend is trying to sort some data and has two algorithms. He runs the algorithms on two sets of data, one with 10 elements and another with 10,000 elements. On the first (smaller) set, algorithm #1 is faster. On the second (larger) set, algorithm #2 is faster. He is very confused. Can you explain how this might be the case?
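As a worked check on the first question (an addition, not part of the original slide), setting the two step counts equal shows where they cross over:

6n = 2n^2  =>  n = 3

For n < 3 the 2n^2 algorithm takes fewer steps; for n > 3 the 6n algorithm wins, and the gap grows without bound. The same effect resolves the friend's confusion: constant factors can favor an asymptotically worse algorithm on small inputs, but the better big-O wins on large ones.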

In general
An algorithm requiring constant time, not dependent on the number of elements, is O(1). An array lookup is an O(1) operation.
An O(log n) or logarithmic algorithm commonly uses a divide-and-conquer approach. Binary search is a good example.
An algorithm you can do with a single for-loop is usually O(n). There are many examples of this: e.g., initializing an array, scanning an array for an element.
An O(n log n) algorithm, also known as linearithmic or loglinear, solves a problem by breaking it into smaller sub-problems, solving them independently, and then combining the solutions. Better sorting algorithms have this sort of complexity. Computing an FFT is also in this category.
An algorithm requiring a doubly-nested for-loop is usually O(n^2). Practical for only relatively small problems.
An algorithm requiring a triply-nested for-loop is usually O(n^3).
O(c^n) or exponential algorithms are not practical. They arise naturally in brute-force solutions to problems where you try every possibility until you get the solution. Example: looking ahead in chess by considering all possible moves.
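A small illustration of the loop-counting rules above (function names are illustrative):

/* O(n): a single loop touches each element once. */
long sum_array(const int array[], int n)
{
    long sum = 0;
    int i;

    for (i = 0; i < n; i++)
        sum += array[i];
    return sum;
}

/* O(n^2): a doubly-nested loop examines every pair of elements. */
int count_duplicate_pairs(const int array[], int n)
{
    int i, j, count = 0;

    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (array[i] == array[j])
                count++;
    return count;
}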

Running times

Binary search: recursive pseudocode
On input array, first, last, and k, define recurse as:
1. If first > last, return false.
2. Let middle = (first + last) / 2.
3. If k < array[middle], return recurse(array, first, middle - 1, k).
4. Else if k > array[middle], return recurse(array, middle + 1, last, k).
5. Else return true.
Running time: O(log2 n)
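A direct C translation of this pseudocode (a sketch; names are illustrative). The array must already be sorted; call it as binary_search(array, 0, n - 1, k):

/* Returns 1 if k is present in array[first..last], 0 otherwise. */
int binary_search(const int array[], int first, int last, int k)
{
    int middle;

    if (first > last)
        return 0;                                  /* not found */
    middle = (first + last) / 2;
    if (k < array[middle])
        return binary_search(array, first, middle - 1, k);
    else if (k > array[middle])
        return binary_search(array, middle + 1, last, k);
    else
        return 1;                                  /* found */
}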

Selection sort
N.B. this algorithm is implemented for you in helper.c in Problem Set 3. The basic algorithm:
1. Find the smallest element, put it at the beginning of the array.
2. Find the next smallest element, put it in the next slot.
3. Continue until all the entries are sorted.

3 2 5 7 4    Original position
2 3 5 7 4    After first step
2 3 5 7 4    After second step
2 3 4 7 5    After next step
2 3 4 5 7    After next step
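The course supplies this in helper.c; a sketch of what such an implementation might look like (not the actual course code):

void selection_sort(int array[], int n)
{
    int i, j, min, tmp;

    for (i = 0; i < n - 1; i++) {
        min = i;                        /* find smallest in array[i..n-1] */
        for (j = i + 1; j < n; j++)
            if (array[j] < array[min])
                min = j;
        tmp = array[i];                 /* put it in slot i */
        array[i] = array[min];
        array[min] = tmp;
    }
}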

Selection sort: analysis
What is the worst-case running time of this algorithm? There would be n swaps. More importantly, there are approximately n^2 / 2 compares ((n - 1) + (n - 2) + ... + 1 = n(n - 1)/2), so the running time is O(n^2). What is the expected-case running time? The same as above: O(n^2).

Sorting
We're now going to embark upon a survey of different sorting techniques. Some important things to focus on:
The method
The data structure used
Efficiency of the algorithm (worst-case and expected running times)
Advantages/disadvantages of one algorithm over another
What makes the sort go to its worst-case running time

Bubble sort
Selection sort uses the observation that you can sort a list of elements by repeatedly finding the smallest element. Here is another possible observation: if the list is sorted, then for any two adjacent items in the array, the first is no greater than the second. If you examined every pair of adjacent elements and guaranteed that this was true, the list would be sorted. How can we turn this into an algorithm for sorting?

while the list is not sorted
    examine each pair of adjacent elements
        if the first element is greater than the second, swap them

How will we know when the list is sorted?

Bubble sort, continued
Here's our C code (swapelements exchanges two array entries):

void bubblesort(int array[], int numitems)
{
    int i, nswaps;

    do {
        nswaps = 0;
        /* Loop through the array comparing each pair of adjacent
         * items. Since we always look ahead one (we compare the
         * current element to the next element), we stop the loop
         * one element earlier than we normally would when looping
         * through an array. */
        for (i = 0; i < numitems - 1; i++) {
            if (array[i] > array[i + 1]) {
                swapelements(array, i, i + 1);
                nswaps++;
            }
        }
    } while (nswaps != 0);
}

Bubble sort, step by step

3 2 5 7 4    Original position
             first compare (do swap)
2 3 5 7 4    After first swap
             second compare (no swap)
             third compare (no swap)
             fourth compare (do swap)
2 3 5 4 7    After second swap; end of first pass through list
             nswaps != 0; keep looping
             first compare (no swap)
             second compare (no swap)
             third compare (do swap)
2 3 4 5 7    After swap
             fourth compare (no swap)
             End of second pass through list; nswaps != 0; keep looping
             first compare (no swap)
             second compare (no swap)
             third compare (no swap)
             fourth compare (no swap)
             End of third pass through list; nswaps == 0; done

Bubble sort: analysis
Average running time is O(n^2). Large elements at the beginning of the list do not pose a problem, as they are quickly swapped toward the end. Small elements near the end, however, move toward the beginning extremely slowly (one position per pass). In general, bubble sort is highly inefficient, even compared to other O(n^2) algorithms like selection sort and insertion sort.

Merge sort
Like binary search, merge sort uses a divide-and-conquer approach. Basic observations: If we have two sorted lists, we can create a single sorted list by merging the two; merging is an O(n) operation. If our task is to sort one large list, we can decompose the problem into sorting two smaller lists and merging them. If we repeatedly divide a list in half, sort the halves, and merge them, each element will be touched O(log n) times. Like binary search, this algorithm lends itself well to recursion. What would the base case be for this algorithm?

Merge sort algorithm
Here's the algorithm (a C sketch follows the example below):
1. If the size of the list is less than or equal to 1, return.
2. Split the list in half.
3. Sort the first half of the list.
4. Sort the second half of the list.
5. Merge the two lists.

Merge sort example

4 2 6 8 1 3 7 5          original list (splits log2 n times)
4 2 6 8 | 1 3 7 5        split into halves (and again into pairs)
2 4 | 6 8 | 1 3 | 5 7    merge sorted pairs
2 4 6 8 | 1 3 5 7        merge sorted halves
1 2 3 4 5 6 7 8          final merge

For each level of splitting, all n of the elements are touched when merging.
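A C sketch of the algorithm above, assuming a caller-supplied scratch array the same size as the input (names are illustrative, not course code):

/* Sorts array[first..last]; temp must be at least as large as array. */
void merge_sort(int array[], int temp[], int first, int last)
{
    int middle, i, j, k;

    if (first >= last)                  /* size <= 1: nothing to do */
        return;
    middle = (first + last) / 2;
    merge_sort(array, temp, first, middle);        /* sort first half  */
    merge_sort(array, temp, middle + 1, last);     /* sort second half */

    /* Merge the two sorted halves into temp (an O(n) operation)... */
    i = first;
    j = middle + 1;
    k = first;
    while (i <= middle && j <= last)
        temp[k++] = (array[i] <= array[j]) ? array[i++] : array[j++];
    while (i <= middle)
        temp[k++] = array[i++];
    while (j <= last)
        temp[k++] = array[j++];

    /* ...then copy the merged result back. */
    for (k = first; k <= last; k++)
        array[k] = temp[k];
}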

Merge sort: analysis
Best, worst, and expected times are all O(n log n). Even if the list is already sorted, it does not save us much. The partitioning of the set is done for convenience of implementation, not for efficiency. Leading question: how could we partition the list to improve efficiency rather than convenience? We would like the same basic algorithm as merge sort: divide the list in two, then sort each half. What we might like to improve upon, given the merge sort algorithm:
Skip the merge phase.
Partition the set based on a value rather than just on physical location.
Avoid the use of temporary arrays (less memory usage).
The solution is called quicksort (sometimes qsort).

Quicksort algorithm
If the number of elements in the array is less than or equal to 1, there is nothing to sort; return. Otherwise, pick a value in the array (the split value). Picking good values improves the robustness of quicksort: it becomes more difficult to make it go quadratic. For now, we will just pick the first element. Now organize the array into three areas:
The items equal to the value
The items less than the value
The items greater than the value
Move the items equal to the value between those less than and those greater than. Then sort the items less than the value, and sort the items greater than the value.

Pick a value: 8
Divide into three areas: equal to 8 | less than 8 | greater than 8
Put the equal area between the other areas: less than 8 | equal to 8 | greater than 8

Quicksort partition, step by step
Key = 42 (the first value). The original slides color-code values equal to, less than, and greater than the split/key value, highlight the values just swapped, and track three positions: E (the equal area), L (the left-to-right scan), and G (the boundary of the greater-than area). The array states, with the swaps that produce them:

42 30 55 42 94 66 18 44 67    start; L finds 55 > 42
42 30 67 42 94 66 18 44 55    swap 55 with 67 (G moves left); 67 > 42
42 30 44 42 94 66 18 67 55    swap 67 with 44 (G moves left); 44 > 42
42 30 18 42 94 66 44 67 55    swap 44 with 18 (G moves left); 18 < 42, L advances
42 42 18 30 94 66 44 67 55    42 equals the key: swap it into the equal area (E grows)
42 42 18 30 66 94 44 67 55    swap 94 with 66; L and G meet, scan complete

Finally, swap the equal area into the middle:

30 42 18 42 66 94 44 67 55    swap the first 42 with 30
30 18 42 42 66 94 44 67 55    swap the second 42 with 18

Now recurse on the less-than area (30 18) and the greater-than area (66 94 44 67 55).
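A compact C sketch of this three-way partitioning scheme (sometimes called a Dutch-national-flag partition; an illustration of the idea, not the course's code). Rather than collecting the equal items at the front and swapping them into the middle at the end, it grows the equal area in place between three indices:

static void swap(int *a, int *b)
{
    int t = *a; *a = *b; *b = t;
}

void quicksort(int array[], int first, int last)
{
    int key, lo, i, hi;

    if (first >= last)          /* 0 or 1 elements: nothing to sort */
        return;

    key = array[first];         /* naive choice: the first element */
    lo = first;                 /* invariant: array[first..lo-1] <  key */
    i  = first;                 /* invariant: array[lo..i-1]     == key */
    hi = last;                  /* invariant: array[hi+1..last]  >  key */

    while (i <= hi) {
        if (array[i] < key)
            swap(&array[lo++], &array[i++]);
        else if (array[i] > key)
            swap(&array[i], &array[hi--]);
        else
            i++;                /* equal to key: leave it in place */
    }
    quicksort(array, first, lo - 1);   /* recurse on less-than area    */
    quicksort(array, hi + 1, last);    /* recurse on greater-than area */
}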

Quicksort: analysis
Usually faster than merge sort. Expected running time is O(n log n); worst-case running time is O(n^2). Handles duplicates efficiently. Ways to improve efficiency:
Pick the split value more carefully:
Randomly select a value.
Calculate the median of the first, middle, and last elements.
Calculate the median of medians (pick 9 elements, compute the median of each set of three, and select the median of those three medians). This is what the commercial qsort does.
Move the greater-than pointer from the end towards the middle until it finds an element that is less than the split value, instead of automatically swapping.