The Assignment Problem: Exploring Parallelism
Timothy J. Rolfe
Department of Computer Science
Eastern Washington University
319F Computing & Engineering Bldg.
Cheney, Washington USA
Timothy.Rolfe@mail.ewu.edu

Abstract: The linear assignment problem requires the determination of an optimal permutation vector for the assignment of tasks to agents. Even the backtracking implementation supports a rather powerful bounding function. Since the processing of permutation families (based on low-subscripted vector assignments) can be done independently of each other, one may examine parallel processing strategies and discover instances in which parallel execution is a very bad idea. Because of article size limitations, this article discusses only backtracking in parallel. A later article will address branch-and-bound in parallel.

Categories and Subject Descriptors: F.2.2 [Nonnumerical Algorithms and Problems] Computations on discrete structures; G.2.1 [Discrete Mathematics] Combinatorics (permutations and combinations); G.2.3 [Discrete Mathematics] Applications

General Terms: Algorithms, Performance

Keywords: Parallel Processing, Backtracking

1. INTRODUCTION

The Assignment Problem provides a useful basis for exploring parallel solution of problems, since it appears to be an embarrassingly parallel problem: one in which one may work independently on subproblems, since the subproblems do not depend on each other. Specifically, the Linear Assignment Problem [1] may be briefly described thus (either as a minimization problem or a maximization problem): given a set of n agents and a set of n tasks, find the optimal assignment of agents to tasks, based on a matrix that gives the result of assigning each agent to each task, in which each agent has a single task and each task has a single agent. As a minimization problem, the matrix would be the cost matrix, giving the cost of each agent [j] and task [k] assignment, and one wishes the minimum-cost assignment.
As a maximization problem, the matrix would be the benefit matrix, and one wishes the maximum-benefit assignment. Since the two forms are equivalent, this discussion is framed in terms of the maximization problem with a benefit matrix.

[Except for permute, all code segments below can be viewed in the context of their full programs through the Web resource described in Section 6.]

2. BOUNDED BACKTRACKING [2]

The pure brute-force solution would be to examine all n! permutations and find a maximal-benefit permutation, since the permutations may not all have unique benefits. As the basis for discussion, here is one possible permutation-generation algorithm. [3]

void permute(int index, int n, int perm[])
{  if (index == n)         // All cells assigned
      process(perm, n);
   else                    // March available values thru [index]
   {  int k, hold;
      for (k = index; k < n; k++)
      {  swap(perm, index, k);
         // Put bounding logic here if some
         // partial permutations can be
         // immediately discarded.
         permute(index+1, n, perm);
      }
      // Above loop right rotates [index..n-1] by one,
      // so perform the left rotation to restore order.
      hold = perm[index];
      for (k = index+1; k < n; k++)
         perm[k-1] = perm[k];
      perm[n-1] = hold;
   }
}

We can now start pruning the decision tree. First, it is possible to examine the two easiest permutations and obtain a lower bound on the solution: simply make the assignments consistent with traversing the benefit matrix in a diagonal (row = col) or anti-diagonal (row + col = n-1) fashion. Each of these is a valid solution to the problem,

inroads SIGCSE Bulletin Volume 41, Number June
and the higher-benefit solution provides a lower bound for all subsequent work.

Once there is a lower bound on the solution, backtracking allows early pruning of the decision tree within the logic that generates the permutations. As noted in the above permute() code, one can insert bounding logic into the permutation generation, actually making the recursive call permute(index+1, n, perm) only if the partial permutation [0..index] can be the basis for a successful complete solution. Thus one can compute the benefit for just these assignments; they will be the fixed portion in permutations built with this basis. It is then possible to find an upper bound with this prefix without completely solving the problem: examine the unassigned cells and get the largest possible value simply by taking the column maxima, looking at rows in the range [index+1..n-1] and columns perm[index+1] through perm[n-1]. In terms of the following code segment (which assumes a global benefit matrix), this would be obtained by maxadditional = colmaxsum(index+1, perm).

int colmaxsum(int start, int perm[])
{  int sum = 0, k;
   for (k = start; k < size; k++)
   {  int columnmaximum = benefit[start][perm[k]], row;
      for (row = start+1; row < size; row++)
         if (columnmaximum < benefit[row][perm[k]])
            columnmaximum = benefit[row][perm[k]];
      sum += columnmaximum;
   }
   return sum;
}

Thus, for any partial permutation that has fixed perm[0] through perm[index], no complete permutation can exceed the total of benefit[k][perm[k]] for k from 0 to index, plus the above colmaxsum with start equal to index+1. The initial permutation comes from the diagonal or anti-diagonal permutation mentioned above, and even with this basis some partial permutations can be discarded. As calculation proceeds, however, the processing of completed permutations allows successive refinement of the maximal-benefit permutation, and each of these raises the bar on the lower bound used in discarding partial permutations.
Parallel Execution

If one has access to a multi-processor / multi-core computer running a Unix variant, the Unix fork() function provides easy access to parallel execution. At the point at which the statement child = fork() is executed, another process, the child process, is created and receives zero from the function, while the original process receives back the process ID of the child process. The child process is created in exactly the same state as the parent process except for the value returned by fork. For all practical purposes it has a copy of all variables in the parent in its own space (that is, there is no shared memory), and it also shares any files opened for output. Each process can then proceed to work on a subproblem of the over-all problem, and then save results in a shared output file.

In the context of the assignment problem, one can generate a small number of processes for ranges of values in perm[0]. Each can find the optimal permutation flowing from its initial states and save that state in a shared file.

// Break the problem into nprocs pieces:
lo = 0;
hi = size / nprocs;
for (proc = 1; proc < nprocs; )
{  Child = fork();
   if (Child != 0)
      break;              // Parent keeps its current [lo..hi)
   lo = hi;
   hi = size * ++proc / nprocs;
}
// Explore each option for this set
for (k = lo; k < hi; k++)
{  swap(vect, 0, k);
   // Positions [1..n-1] done sequentially
   explore(1, vect, benefit[0][vect[0]]);
   swap(vect, k, 0);      // undo the swap
}

This provides a chain of parent/child processes, each working on different initial permutations. The use of a shared output file for communication is discussed in the article Bargain-Basement Parallelism. [4] Note, however, that each process develops its own lower limit, so that there is significantly less decision-tree pruning than in the sequential program.

[Note: in both the above C code and in the following Java code, the permute method above becomes the explore method.
The problem size is a global variable, so the arguments to explore are the index being assigned, the permutation vector, and finally the total benefit for the portion of the permutation already fixed.]

On the other hand, within Java one can use separate Java threads to compute ranges of initial permutations. In addition, Java provides a shared-memory environment, so that one does not need to send messages through files. One simply needs to ensure that the update of the optimal permutation and its benefit can be executed by only one thread at a time; that is, it must be a synchronized method. An additional bonus is that the global lowerlimit can be shared by all threads, so that they can prune their decision trees based on solutions discovered by other threads. The generation of the threads and their execution is simple.

static void threadrun(int NThreads)
{  int thrd, lo = 0, hi;
   for (thrd = 0; thrd < NThreads; thrd++)
   {  hi = size * (thrd+1) / NThreads;
      engine[thrd] = new Compute(lo, hi, thrd);
      lo = hi;
   }
   try
   {  for (thrd = 0; thrd < NThreads; thrd++)
         engine[thrd].start();
      for (thrd = 0; thrd < NThreads; thrd++)
         engine[thrd].join();
   }
   catch (Exception e)
   {  e.printStackTrace();  }
}

// Inner class accesses outer class methods
// and data.
class Compute extends Thread
{  int start, finish,  // Index range
       position;       // Thread ID
   int[] vect;         // Working solution

   Compute(int lo, int hi, int thread)
   {  start = lo;  finish = hi;  position = thread;
      // Working permutation from the solution.
      // Insure that ALL threads start the same.
      vect = (int[]) solution.clone();
   }

   public void run()
   {  for (int k = start; k < finish; k++)
      {  swap(vect, 0, k);
         explore(1, vect, benefit[0][vect[0]]);
         swap(vect, 0, k);
      }
   }
}

The assignment of tasks to processors is handled for the programmer by the Unix operating system in the fork example and by the Java Virtual Machine in the Java thread example.

Alternative: Thread Self-Scheduling

Rather than working on a block of states, the compute engines can work one initial state at a time. They get their work from a synchronized method that doles out the individual permutations, an approach referred to as self-scheduling. Each compute engine then processes several permutations before terminating. The method void threadrun(int []perm, int NThread) would initialize from perm the global vector pending (holding the present state of the permutation), set the global variable knext to 0, and start the requested number of threads without sending them the permutation. Instead, each thread will invoke a boolean getproblem method.

// Insure that only one thread at a time
// accesses this. Return a boolean false
// when all permutations have been
// distributed.
// These are the global variables:
// int pending[], knext;
synchronized boolean getproblem(int []work)
{  if (knext >= size)
      return false;
   swap(pending, 0, knext++);
   System.arraycopy(pending, 0, work, 0, size);
   return true;
}

The individual threads will invoke this outer-class method to get the permutations they work on, and to receive the signal that they can terminate.

// Inner class accesses outer class methods
// and data.
class Compute extends Thread
{  public Compute()
   {  ;  // Nothing to do
   }

   public void run()
   {  int []perm = new int[size];
      while (getproblem(perm))
         explore(1, perm, benefit[0][perm[0]]);
   }
}

3. EXPERIMENTAL RESULTS

The programs discussed above were run on three different random data sets, one involving a 30x30 random benefit grid and two involving 32x32 random benefit grids, where the permutations involve values in the ranges [0..29] and [0..31] respectively. The 30x30 grid has its highest-value permutation occurring about a fifth of the way through the permutations (the first three terms are 5, 23, and 4). One of the 32x32 grids has the highest-value permutation very soon (the first three terms are 1, 4, and 30), while the other has its solution nearly 80% of the way through the permutations (the first three terms are 24, 9, and 14).

The programs were run under Ubuntu's Linux (an smp kernel) on quad-processor hyperthreading computers with Intel Xeon CPUs. Because of the hyperthreading, these appear to Linux to be eight CPUs running at 2.80GHz. Programs were run in their pure sequential form and then in their parallel form with two through eight processes cooperating in the solution.

The observed behavior of the parallel execution in C based on the Unix fork makes clear that there are dependencies among the calculations for different permutation vectors: in the sequential calculation the lower limit for solutions is updated throughout the calculation.
In the following table the wall-clock time required for solutions is shown for the pure sequential program and then
for the two-process through eight-process parallel executions.

Unix C Fork Implementation
[Table: wall-clock times for the sequential program and for two through eight processes on the three data sets; the numeric values did not survive transcription.]

In multiplying the number of processes dealing with permutation vectors, the decision-tree pruning is based on the highest-value permutation within each set, and (as is usual for parallel processing) the total time required comes from the slowest process. Because the three data sets are generated randomly, the permutation values are also randomly distributed. It would appear that in the 32x32a data set the seven-way split generates a subproblem with very little decision-tree pruning.

The Java threaded implementations, however, do not have the same problem as the C fork one: each thread has access to the shared static variable (lowerlimit) that drives the decision-tree pruning, so that the high-valued permutation discovered by one thread provides pruning for all other threads as well. The first approach discussed was that in which all threads process approximately the same number of permutations, very similar to the C fork version but with Java threads and the shared memory. We do observe speed-up rather than slow-down in this case.

Java Threads: Equal Number of Permutations
[Table: wall-clock times for the sequential program and for two through eight threads; the numeric values did not survive transcription.]

Given the random distribution of permutation values, there is also a random distribution of speed-up ratios. In general, though, it appears that adding processes tends to speed things up. One caveat: it is the hyperthreading that makes the four processors appear to be eight. In pure compute-bound benchmarks the speed-up ratios from one to four processes are better than the ratios from five to eight processes.

In the self-scheduling approach, the threads do not necessarily process the same number of permutations, but instead process individual permutations until all have been computed.
This allows for a random case in which at the end a single thread is processing the last permutation set, again delaying completion.

Java Threads: Self-Scheduling Permutations
[Table: wall-clock times for the sequential program and for two through eight threads; the numeric values did not survive transcription.]

From these limited experimental data, neither of the Java thread implementations is clearly better than the other.

4. BRANCH-AND-BOUND PREVIEW

The preferred method for solving this problem is through Branch-and-Bound; that will be covered in a subsequent paper. The power of the best-fit-first approach can be seen in the following experimental results:

Java Threads: Branch-and-Bound Version
[Table: wall-clock times for the sequential program and for two through eight threads; the numeric values did not survive transcription.]

5. SUMMARY

In this case study, the linear assignment problem is brought into parallel solution by means of backtracking. In the process the C fork approach revealed that there is dependence among the individual solutions in the extent to which decision-tree pruning is possible, a problem that does not affect the Java threads approach. There will be a second paper that continues the case study with the solution of this problem through the preferred branch-and-bound algorithmic strategy.

6. WEB RESOURCE

This page provides access to this paper and to the subsequent paper on Branch-and-Bound. It also provides
access to the programs discussed above and an Excel workbook giving the results of the numerical experiments behind the above tables.

ACKNOWLEDGEMENTS

These results were obtained using equipment within the Computer Science Department at Eastern Washington University.

REFERENCES

[1] Gilles Brassard and Paul Bratley, Fundamentals of Algorithmics (Prentice-Hall, Inc., 1996). See also Anany Levitin, Introduction to the Design & Analysis of Algorithms (2nd ed.; Pearson Education, Inc., 2007), pp. 116, 118.

[2] The complete chapter on backtracking from Sartaj Sahni's Data Structures, Algorithms, and Applications in Java (Silicon Press, 2004) is available online.

[3] Timothy Rolfe, "Backtracking Algorithms," Dr. Dobb's Journal, Vol. 29, No. 5 (May 2004), pp. 48 ff. Text of the article is available online; source code is available through ftp:// /sourcecode/ddj/2004/0405.zip

[4] Timothy Rolfe, "Bargain-Basement Parallelism," Dr. Dobb's Journal, Vol. 28, No. 2 (February 2003), pp. 46, 48, 50. Text of the article is available online; source code is available through ftp:// /sourcecode/ddj/2003/0302.zip
Threads CS1372 Lecture 13 CS1372 Threads Fall 2008 1 / 10 Threads 1 In order to implement concurrent algorithms, such as the parallel bubble sort discussed previously, we need some way to say that we want
More informationChapter 3: Processes. Operating System Concepts Essentials 2 nd Edition
Chapter 3: Processes Silberschatz, Galvin and Gagne 2013 Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Interprocess Communication Examples of IPC Systems Communication
More informationChapter 8: Main Memory
Chapter 8: Main Memory Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel
More informationChapter 8: Main Memory
Chapter 8: Main Memory Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel 32 and 64-bit Architectures Example:
More informationChapter 3: Processes. Chapter 3: Processes. Process in Memory. Process Concept. Process State. Diagram of Process State
Chapter 3: Processes Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 3.2 Silberschatz,
More informationChapter 4: Processes
Chapter 4: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 4.1 Silberschatz, Galvin and Gagne
More informationOut-of-Order Parallel Simulation of SystemC Models. G. Liu, T. Schmidt, R. Dömer (CECS) A. Dingankar, D. Kirkpatrick (Intel Corp.)
Out-of-Order Simulation of s using Intel MIC Architecture G. Liu, T. Schmidt, R. Dömer (CECS) A. Dingankar, D. Kirkpatrick (Intel Corp.) Speaker: Rainer Dömer doemer@uci.edu Center for Embedded Computer
More informationWeb page recommendation using a stochastic process model
Data Mining VII: Data, Text and Web Mining and their Business Applications 233 Web page recommendation using a stochastic process model B. J. Park 1, W. Choi 1 & S. H. Noh 2 1 Computer Science Department,
More informationProject and Production Management Prof. Arun Kanda Department of Mechanical Engineering Indian Institute of Technology, Delhi
Project and Production Management Prof. Arun Kanda Department of Mechanical Engineering Indian Institute of Technology, Delhi Lecture - 8 Consistency and Redundancy in Project networks In today s lecture
More informationChapter 6 Backtracking Algorithms. Backtracking algorithms Branch-and-bound algorithms
Chapter 6 Backtracking Algorithms Backtracking algorithms Branch-and-bound algorithms 1 Backtracking Algorithm The task is to determine algorithm for finding solutions to specific problems not by following
More informationProcess Concept. Chapter 4: Processes. Diagram of Process State. Process State. Process Control Block (PCB) Process Control Block (PCB)
Chapter 4: Processes Process Concept Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems An operating system
More informationChapter 4: Processes
Chapter 4: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 4.1 Process Concept An operating
More informationProcesses and Threads. Processes and Threads. Processes (2) Processes (1)
Processes and Threads (Topic 2-1) 2 홍성수 Processes and Threads Question: What is a process and why is it useful? Why? With many things happening at once in a system, need some way of separating them all
More informationRunning Time. Analytic Engine. Charles Babbage (1864) how many times do you have to turn the crank?
4.1 Performance Introduction to Programming in Java: An Interdisciplinary Approach Robert Sedgewick and Kevin Wayne Copyright 2002 2010 3/30/11 8:32 PM Running Time As soon as an Analytic Engine exists,
More informationimplementing the breadth-first search algorithm implementing the depth-first search algorithm
Graph Traversals 1 Graph Traversals representing graphs adjacency matrices and adjacency lists 2 Implementing the Breadth-First and Depth-First Search Algorithms implementing the breadth-first search algorithm
More informationJava How to Program, 10/e. Copyright by Pearson Education, Inc. All Rights Reserved.
Java How to Program, 10/e Copyright 1992-2015 by Pearson Education, Inc. All Rights Reserved. Data structures Collections of related data items. Discussed in depth in Chapters 16 21. Array objects Data
More informationAnswer any FIVE questions 5 x 10 = 50. Graph traversal algorithms process all the vertices of a graph in a systematic fashion.
PES Institute of Technology, Bangalore South Campus (Hosur Road, 1KM before Electronic City, Bangalore 560 100) Solution Set Test III Subject & Code: Design and Analysis of Algorithms(10MCA44) Name of
More informationThe Big Picture So Far. Chapter 4: Processes
The Big Picture So Far HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection, management, VM Interrupt
More informationParallelization of Graph Isomorphism using OpenMP
Parallelization of Graph Isomorphism using OpenMP Vijaya Balpande Research Scholar GHRCE, Nagpur Priyadarshini J L College of Engineering, Nagpur ABSTRACT Advancement in computer architecture leads to
More informationChapter 17 Parallel Work Queues
Chapter 17 Parallel Work Queues Part I. Preliminaries Part II. Tightly Coupled Multicore Chapter 6. Parallel Loops Chapter 7. Parallel Loop Schedules Chapter 8. Parallel Reduction Chapter 9. Reduction
More informationUnderstanding The Behavior of Simultaneous Multithreaded and Multiprocessor Architectures
Understanding The Behavior of Simultaneous Multithreaded and Multiprocessor Architectures Nagi N. Mekhiel Department of Electrical and Computer Engineering Ryerson University, Toronto, Ontario M5B 2K3
More informationMINIMAL EDGE-ORDERED SPANNING TREES USING A SELF-ADAPTING GENETIC ALGORITHM WITH MULTIPLE GENOMIC REPRESENTATIONS
Proceedings of Student/Faculty Research Day, CSIS, Pace University, May 5 th, 2006 MINIMAL EDGE-ORDERED SPANNING TREES USING A SELF-ADAPTING GENETIC ALGORITHM WITH MULTIPLE GENOMIC REPRESENTATIONS Richard
More information1.00 Introduction to Computers and Engineering Problem Solving. Quiz 1 March 7, 2003
1.00 Introduction to Computers and Engineering Problem Solving Quiz 1 March 7, 2003 Name: Email Address: TA: Section: You have 90 minutes to complete this exam. For coding questions, you do not need to
More informationChapter 3: Process Concept
Chapter 3: Process Concept Chapter 3: Process Concept Process Concept Process Scheduling Operations on Processes Inter-Process Communication (IPC) Communication in Client-Server Systems Objectives 3.2
More informationLecture 6 Sorting and Searching
Lecture 6 Sorting and Searching Sorting takes an unordered collection and makes it an ordered one. 1 2 3 4 5 6 77 42 35 12 101 5 1 2 3 4 5 6 5 12 35 42 77 101 There are many algorithms for sorting a list
More informationChapter 3: Process Concept
Chapter 3: Process Concept Chapter 3: Process Concept Process Concept Process Scheduling Operations on Processes Inter-Process Communication (IPC) Communication in Client-Server Systems Objectives 3.2
More informationCmpSci 187: Programming with Data Structures Spring 2015
CmpSci 187: Programming with Data Structures Spring 2015 Lecture #9 John Ridgway February 26, 2015 1 Recursive Definitions, Algorithms, and Programs Recursion in General In mathematics and computer science
More informationAbout this exam review
Final Exam Review About this exam review I ve prepared an outline of the material covered in class May not be totally complete! Exam may ask about things that were covered in class but not in this review
More informationBasics of Java: Expressions & Statements. Nathaniel Osgood CMPT 858 February 15, 2011
Basics of Java: Expressions & Statements Nathaniel Osgood CMPT 858 February 15, 2011 Java as a Formal Language Java supports many constructs that serve different functions Class & Interface declarations
More informationLecture Notes for Chapter 2: Getting Started
Instant download and all chapters Instructor's Manual Introduction To Algorithms 2nd Edition Thomas H. Cormen, Clara Lee, Erica Lin https://testbankdata.com/download/instructors-manual-introduction-algorithms-2ndedition-thomas-h-cormen-clara-lee-erica-lin/
More informationModule 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur
Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables
More informationChapter 4: Processes. Process Concept
Chapter 4: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 4.1 Process Concept An operating
More informationSorting. Bubble Sort. Selection Sort
Sorting In this class we will consider three sorting algorithms, that is, algorithms that will take as input an array of items, and then rearrange (sort) those items in increasing order within the array.
More informationBackTracking Introduction
Backtracking BackTracking Introduction Backtracking is used to solve problems in which a sequence of objects is chosen from a specified set so that the sequence satisfies some criterion. The classic example
More informationConflict Driven Learning and Non-chronological Backtracking
x1 + x4 Conflict Driven Learning and Conflict Driven Learning and x1 + x4 x1 x1=0 x1=0 Conflict Driven Learning and x1 + x4 x1 x1=0 x1=0 Conflict Driven Learning and x1 + x4 x1 x1=0, x4=1 x1=0 x4=1 Conflict
More informationUsing ODHeuristics To Solve Hard Mixed Integer Programming Problems. Alkis Vazacopoulos Robert Ashford Optimization Direct Inc.
Using ODHeuristics To Solve Hard Mixed Integer Programming Problems Alkis Vazacopoulos Robert Ashford Optimization Direct Inc. February 2017 Summary Challenges of Large Scale Optimization Exploiting parallel
More informationMagic Labelings on Cycles and Wheels
Magic Labelings on Cycles and Wheels Andrew Baker and Joe Sawada University of Guelph, Guelph, Ontario, Canada, N1G 2W1 {abaker04, jsawada}@uoguelph.ca Abstract. We present efficient algorithms to generate
More informationChapter 3: Processes. Operating System Concepts 8th Edition,
Chapter 3: Processes, Administrivia Friday: lab day. For Monday: Read Chapter 4. Written assignment due Wednesday, Feb. 25 see web site. 3.2 Outline What is a process? How is a process represented? Process
More informationPerformance impact of dynamic parallelism on different clustering algorithms
Performance impact of dynamic parallelism on different clustering algorithms Jeffrey DiMarco and Michela Taufer Computer and Information Sciences, University of Delaware E-mail: jdimarco@udel.edu, taufer@udel.edu
More information