
The Pennsylvania State University
The Graduate School
Department of Industrial and Manufacturing Engineering

SEARCH-BASED MAXIMALLY PERMISSIVE DEADLOCK AVOIDANCE IN FLEXIBLE MANUFACTURING CELLS

A Thesis in Industrial Engineering
by
Amiteshwar Singh Sidhu

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science

August 2009

The thesis of Amiteshwar Singh Sidhu was reviewed and approved* by the following:

Richard A. Wysk
Professor and Leonhard Chair in Engineering
Thesis Advisor

Deborah J. Medeiros
Associate Professor of Industrial Engineering

M. Jeya Chandra
Professor of Industrial Engineering
Graduate Program Coordinator of Industrial and Manufacturing Engineering

*Signatures are on file in the Graduate School

ABSTRACT

A search-based, maximally permissive method for deadlock avoidance was developed in this research. The method is maximally permissive in that no safe part move requests are denied. This leads to higher resource utilization than conservative approaches. Maximal permissiveness, however, is associated with a high computational expense. Two different tree search strategies, a depth first search and a hybrid search, were developed to perform the look ahead evaluation of part move requests. Strategies for reducing the amount of search involved in deadlock avoidance were developed and are presented. A strategy for avoiding repetitive search is also presented. A factorial experiment was conducted to evaluate the relative performance of the different deadlock avoidance algorithm combinations presented in this research. The results indicate that the depth first search strategy, when used with repeat avoidance, performs better than the hybrid search. From another experiment it was observed that evaluating the safety of most part move requests did not require evaluation of a large number of reachable states; only a small percentage of part move requests required evaluation of a large number of reachable states.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES

Chapter 1 INTRODUCTION
  Definition
  Deadlocks in Flexible Manufacturing Systems
  System Representation Using Directed Graphs
  Part Flow and Impending Deadlocks
  Deadlock Resolution Approaches
  Deadlock Detection and Recovery
  Deadlock Prevention
  Deadlock Avoidance
  Maximally Permissive Deadlock Avoidance Algorithms
  Research Objective
  Thesis Overview

Chapter 2 LITERATURE REVIEW
  Introduction
  Directed Graphs
  Circuits and Strongly Connected Components
  Conservative Deadlock Avoidance
  Maximally Permissive Deadlock Avoidance
  Summary

Chapter 3 LOOK AHEAD SEARCH
  Introduction
  Assumptions
  System State Representation
  State Generation
  Free Path Reduction
  State Evaluation
  Circuit Detection
  Directed Graph Preparation
  Circuit Detection Algorithm
  Circuit Detection Example
  Generation and Exploration of State Space
  Depth First Search
  Depth First Search Algorithm
  Discussion
  3.9.3 Ordering of Parts
  Hybrid Search
  Hybrid Search Algorithm
  Hybrid Search Example
  Discussion of the Scoring Method
  Repetitive Search in Look Ahead
  Summary

Chapter 4 PRE-SOLVE PROCEDURES
  Introduction
  Non-Bounded Graphs
  Computing Strongly Connected Components
  Theoretic Validity of SCC Reduction
  Introduction
  Lemmas
  ELLS Algorithm
  Proof of Validity of SCC Reduction
  Application of SCC Reduction
  SCC Analysis
  Relationships Between n, m and e
  Definitions and Some Results
  Safety of m < n = e
  Free Path Reduction
  Circuit Sufficiency
  Application of Pre-Solve Procedures
  Summary

Chapter 5 EXPERIMENTS
  Introduction
  Logical Components of Proposed Solution
  Experimental Objective
  Chapter Outline
  Experimental Environment
  Comparison of Algorithm Combinations
  Experimental Design
  Analysis
  Interpretation of Results
  Analysis of Number of States Evaluated
  Summary

Chapter 6 CONCLUSIONS AND FUTURE WORK
  Introduction
  Conclusions
  6.3 Future Work

Bibliography

LIST OF FIGURES

Figure 1.1 Deadlock in a Flexible Manufacturing Cell
Figure 1.2 Directed Graph Representation
Figure 1.3 Illustration of an Impending Deadlock
Figure 3.1 Outline of Proposed Methodology
Figure 3.2 Pseudo code for deriving the allocation vector
Figure 3.3 System State Example
Figure 3.4 Pseudo code for state generation
Figure 3.5 Pseudo code for state evaluation
Figure 3.6 Directed Graph Preparation for Circuit Detection
Figure 3.7 Flowchart of the Calling Function
Figure 3.8 Flowchart of the Recursive Function (DFS_Circuits)
Figure 3.9 Directed Graph of Circuit Detection Example
Figure 3.10 Flowchart of DFS Tree Search Algorithm
Figure 3.11 Pseudo code of the calling procedure of the Depth First Search
Figure 3.12 Pseudo code of the recursive function of the Depth First Search
Figure 3.13 Depth First Search Tree: s_ROOT and s_1
Figure 3.14 Depth First Search Tree
Figure 3.15 Complete Depth First Search Tree
Figure 3.16 Flowchart of Hybrid Search Procedure
Figure 3.17 Pseudo code of calling procedure of HS algorithm
Figure 3.18 Pseudo code of the scoring procedure
Figure 3.19 Pseudo code of the main procedure of HS algorithm
Figure 3.20 Tree generated for the Hybrid Search example
Figure 3.21 Partial Repetitive Search Tree
Figure 3.22 Repetitive Search Example
Figure 4.1 Pseudo code for establishing non-bounded graph
Figure 4.2 Example of a Non-Bounded Graph
Figure 4.3 Strongly Connected Components
Figure 4.4 Example of a Condensation Graph
Figure 4.5 Example of State Subset
Figure 4.6 Pseudo code for the ELLS algorithm
Figure 4.7 SCC Reduction Example Original Graph
Figure 4.8 SCC Reduction Example Reduced Graph after SCC Reduction
Figure 4.9 Examples of Shared and Non-Shared Segments
Figure 4.6 Flow Chart of the Deadlock Avoidance Methodology
Figure 5.1 Screenshot of the simulation of an FMS
Figure 5.2 Normal Probability plot of Effects Estimates
Figure 5.3 Normal Probability plot of Residuals
Figure 5.4 Plot of Residuals versus Predicted Response
Figure 5.5 Plot of the BD interaction
Figure 5.6 Plot of Predicted Response
Figure 5.7 Histogram of Number of States Evaluated
Figure 5.8 Proportion of Unsafe Requests

LIST OF TABLES

Table 2.1 Basic Directed Graph Definitions
Table 2.2 Requirements and Performance Indicators for Deadlock Avoidance Solutions
Table 2.3 Factors and their Levels in Experiments Conducted by Hosack et al [2003]
Table 3.1 Part Routings for System State Example
Table 3.2 Immediate Next Processing Locations of Parts
Table 3.3 Basic Definitions relating to Trees
Table 3.4 Part Routings for HS Algorithm example
Table 3.5 Part Routings for system state example
Table 4.1 Part Routings for example
Table 4.2 Part Routings for State Subset Example
Table 4.3 Part Routings for SCC Reduction example
Table 5.1 Recipes of Deadlock Avoidance Algorithms
Table 5.2 System Parameters
Table 5.3 Part Routings of the FMS illustrated in Figure
Table 5.4 Logical Components and the Options
Table 5.5 Design Matrix and Experimental Data
Table 5.6 Effects Estimates
Table 5.7 Analysis of Variance

Chapter 1
INTRODUCTION

1.1 Definition

A set of two or more entities is said to be in a deadlock state when every entity is waiting for a resource that is held by another entity in the same set. This situation results in a circular wait condition in which entities are unable to release resources until they are granted resources that are held by other entities. All entities and resources involved remain in an indefinite circular wait until recovery events restore regular flow. Deadlocks occur in computer operating systems, distributed database systems, automated guided vehicles and flexible manufacturing systems (FMSs). These systems share certain flow pre-requisites that have been identified by Coffman et al [1971] as four necessary conditions for the occurrence of deadlocks. All of the following four conditions must hold for a deadlock to occur:

1. Mutual Exclusion: Entities claim exclusive use of the resources they require. At least one resource must be held in a non-sharable mode in order to satisfy this condition.

2. Hold and Wait: An entity will hold the resource currently allocated to it while it waits for the next resource.

3. No Preemption: An entity cannot be forcibly removed from its currently allocated resource until it has received the requested service to completion.

4. Circular Wait: There must exist a set {P_0, P_1, ..., P_n} of waiting entities such that P_0 is waiting for a resource that is held by P_1, P_1 is waiting for a resource that is held by P_2, ..., P_{n-1} is waiting for a resource that is held by P_n, and P_n is waiting for a resource that is held by P_0.

1.2 Deadlocks in Flexible Manufacturing Systems

Most FMSs are comprised of three sub-systems: a set of workstations, often automated CNC machines, connected by a programmable material handling system, and a central control computer which supervises the process activities and the material handling. Wysk et al [1991] define a direct address material handling system to be a material handling system that specifically serves a single machine resource at a time. Robots and Automated Guided Vehicles (AGVs) are examples of direct address systems. FMSs that employ direct address material handling systems can experience the necessary conditions listed in Section 1.1, and are therefore susceptible to deadlocks.

Figure 1.1(a) and Figure 1.1(b) depict an example FMS cell. The cell consists of five single capacity resources labeled M_1, M_2, ..., M_5. Parts requiring processing at this cell arrive via the conveyor labeled IP. On completion of required processing, parts leave via the conveyor labeled OP. A direct address robot labeled R provides material handling within the cell. The end-effector of the robot is assumed to have unit capacity. IP and OP can only hold parts arriving to and departing from the cell, respectively, and may not be used as in-process buffers. No buffer storage exists in this cell. Figure 1.1(a) depicts a parts-to-resources assignment in which three parts labeled A, B and C are assigned to resources M_2, M_3 and M_5 respectively.

Figure 1.1 Deadlock in a Flexible Manufacturing Cell

Consider the part routings of A, B and C to be as depicted in Figure 1.1(a). Suppose part A completes at M_2 and requests the FMS controller for assignment to the next resource in its routing, M_1. Figure 1.1(b) depicts the parts-to-resources assignment that would be realized if the controller allows this request and part A is transported to M_1. A circular wait relationship is realized with this assignment. A subsequent request by part A for further service at M_5 cannot be granted, as part C will be waiting to move to M_3, which is occupied by part B waiting to be moved to M_1. Parts A, B and C are said to be deadlocked and will remain in circular wait until some recovery actions restore production flow.

1.3 System Representation Using Directed Graphs

Several representation methodologies have been used by different deadlock resolution approaches. Directed Graphs, Finite State Machines and Petri Net models have been used for FMS representation. Fanti et al [2004] provide a literature review of previous and current research in each of these methodologies. This research employs directed graphs to represent the part flows through the FMS. A directed graph G = (V, A) is a collection of a set of vertices V, with connections

or arcs specified between certain pairs of vertices. An arc a_ij = (v_i, v_j) represents a connection between v_i and v_j, where a_ij ∈ A and v_i, v_j ∈ V. The connection is directional and conveys the sense from v_i to v_j. Arrows on arcs are used to encode the directional information; thus the arc a_ij has an arrow pointing to v_j. Directed graphs are used to model discrete and continuous flows through different systems. Progression of pieces on a chessboard and of traffic through a network of roads are some example applications of directed graphs.

An FMS can be represented by a directed graph G = (V, A), where V, the set of vertices, represents the set of FMS resources, and A, the set of arcs, represents some mapping of part routings. Let P_t represent the set of parts in the system at time t, let R represent the routing of some part p_i ∈ P_t that is currently at its l-th processing step, and let |R| = k. Routing R may be represented as the sequence R = {r_1, r_2, ..., r_l, ..., r_k}, where all elements of R are in V. A mapping of all pending transitions of p_i would constitute the set A_pi = {all arcs (r_j, r_j+1), l ≤ j < k}; then A = ∪ A_q (over all q ∈ P_t) would represent the mapping of all pending transitions of parts currently in the system.

Symbolically, the resources of the FMS (vertices of the directed graph) are depicted as empty circles labeled with the resource identification next to them. Parts currently assigned to a resource are depicted by placing the part identification label inside the

circles representing that resource. Figure 1.2 depicts the directed graph representation of the parts assignment corresponding to Figure 1.1(a). The arcs are drawn using the part transition mapping described in this section.

Figure 1.2 Directed Graph Representation

1.4 Part Flow and Impending Deadlocks

Two types of part deadlock situations exist in an FMS. Kumaran et al [1995] present definitions of these situations. These definitions are presented here.

1. Part Flow Deadlock: Figure 1.1(b) depicts a deadlock situation in which the parts assignment leads to a circular wait in which all part moves are inhibited, indefinitely, unless an external event disrupts the circular wait relationship amongst the parts. Such a deadlock, in which none of the parts involved can move forward, is defined as a part flow deadlock.

2. Impending Deadlock: An impending deadlock is a parts assignment in which some or all parts can be moved forward, but the system is some finite number of steps short of a part flow deadlock. Thus a parts assignment is said to be in an impending deadlock state if it is not involved in an immediate part flow deadlock, but there exists no sequence of part moves that would prevent an eventual part flow deadlock.

Figure 1.3 Illustration of an Impending Deadlock

Figure 1.3 depicts a directed graph representation of a parts assignment that has an impending deadlock. Part P_1, currently at M_1, needs further processing at M_2 and M_3 in order to finish. Part P_2, currently at M_3, needs further processing at M_2 and M_1 before finishing. No sequence of part moves can prevent the system from entering into a part flow deadlock.
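The arc mapping of Section 1.3 can be made concrete with a few lines of Python. The sketch below is illustrative only (the identifiers are not taken from the thesis): it builds the arc set A of pending transitions from each part's current processing step, and lists the arcs for the two-part assignment of Figure 1.3 described above.

    def pending_transition_arcs(routings, current_step):
        """Build A, the union over parts of the arcs (r_j, r_j+1) for pending transitions.

        routings: part -> full routing (list of resources in processing order)
        current_step: part -> 1-based index of the part's current processing step
        """
        arcs = set()
        for part, route in routings.items():
            l = current_step[part]                 # part is at its l-th step
            for j in range(l, len(route)):         # arcs (r_j, r_{j+1}) for l <= j < k
                arcs.add((route[j - 1], route[j]))
        return arcs

    # Parts assignment of Figure 1.3: P1 at M1 still needs M2 then M3,
    # P2 at M3 still needs M2 then M1.
    routings = {"P1": ["M1", "M2", "M3"], "P2": ["M3", "M2", "M1"]}
    current_step = {"P1": 1, "P2": 1}
    print(sorted(pending_transition_arcs(routings, current_step)))
    # [('M1', 'M2'), ('M2', 'M1'), ('M2', 'M3'), ('M3', 'M2')]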

1.5 Deadlock Resolution Approaches

Silberschatz et al [1997] list three classes of deadlock resolution approaches. These approaches are outlined separately in the three subsections that follow.

1.5.1 Deadlock Detection and Recovery

Systems employing a detection and recovery method for deadlock handling provide a detection routine which examines the state of the system to determine whether a deadlock has occurred. Upon detection of a deadlock, a recovery procedure is called which rolls the system out of the deadlock state. This is done by temporarily transferring one of the parts involved in the deadlock to a storage location.

1.5.2 Deadlock Prevention

Prevention methods provide deadlock free system operation by ensuring that the entire set of necessary conditions for deadlock, as listed in Section 1.1, cannot be simultaneously satisfied. These methods prevent deadlocks by restricting how part flow requests can be made so that the necessary deadlock conditions can never be entered.

Fanti et al [1997] remark that the first three of these conditions (Mutual Exclusion, Hold and Wait, and No Preemption) are satisfied in most FMSs, and that deadlock prevention methods therefore elude deadlocks by ensuring that the fourth condition fails to hold. Examples of deadlock prevention include batch formulation strategies that ensure the parts in any given batch do not form circular wait relationships and are therefore deadlock free.

1.5.3 Deadlock Avoidance

Avoidance methods address the problem of deadlocks by controlling how resources are granted to the parts requesting them. These methods require that the FMS controller has the routing information of all parts. Taking into account the current parts assignment and the part routing information, a deadlock avoidance procedure can decide whether or not a requested part assignment would lead to a deadlock. The requested part assignment is granted only if the resulting state is free of part flow and impending deadlocks.

Reveliotis [2000] provides a scheme for selecting between a detection and recovery policy and an avoidance policy. The avoidance policy was employed any time the time cost of deadlock recovery crossed a threshold. The threshold was defined as a

function of system parameters, which included processing times, job loading and machine setup times.

Lawley et al [2000] establish the NP-Completeness of maximally permissive deadlock avoidance methods, as applied to a class of resource allocation systems. Heuristic methods are typically used to devise tractable deadlock avoidance procedures, at the potential cost of rejecting some safe part move requests.

1.6 Maximally Permissive Deadlock Avoidance Algorithms

Deadlock avoidance algorithms (DAAs), embedded in the FMS controller, typically evaluate each part move request. Upon evaluation the DAA may find the part move request to be either safe or unsafe. A part move request is safe if there exists some subsequent sequence of part moves which would allow all parts in the system to complete all their processing requests and exit the system without deadlock. This implies that the parts assignment resulting from the requested part move will be free of impending and part flow deadlocks.

A DAA is said to be maximally permissive if it accepts all safe part move requests. Conversely, a DAA is said to be conservative, or less than maximally permissive, if it rejects any safe part move requests. Zhang [2000] employs a factor for measuring the permissiveness of a given DAA. This factor is the ratio of the number of part move requests that are accepted by the given DAA to the actual number of safe part move requests. Yelcin et al [2000] show that maximally permissive DAAs lead to better utilization of system resources as compared to conservative DAAs. However, this is achieved at the cost of higher computation times, as argued by Lawley et al [1998].

1.7 Research Objective

This research employs and explores the application of a state space search based enumeration method to avoid deadlocks. Each of the DAAs presented in this research examines the requested part move by exhaustively searching for an emptying sequence of part moves. These methods are thus maximally permissive.
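For reference, the permissiveness factor of Section 1.6 can be computed directly from a log of request decisions. The helper below is a small, hypothetical illustration (the names are not from the thesis): it takes, for each evaluated request, the DAA's decision and the true safety of the request, and returns the ratio of accepted safe requests to actual safe requests.

    def permissiveness_factor(decisions):
        """decisions: list of (accepted_by_daa, actually_safe) boolean pairs.

        Returns accepted-safe-requests / total-safe-requests; 1.0 means maximally permissive.
        A correct DAA never accepts an unsafe request, so this matches Zhang's ratio.
        """
        safe = [d for d in decisions if d[1]]
        if not safe:
            return 1.0                      # no safe requests were posed
        accepted_safe = sum(1 for accepted, _ in safe if accepted)
        return accepted_safe / len(safe)

    # Example: four safe requests, three of which were accepted -> factor 0.75 (conservative).
    print(permissiveness_factor([(True, True), (True, True), (False, True), (True, True), (False, False)]))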

This research attempts to identify techniques to improve the performance of maximally permissive DAAs in the following ways:

1. Finding ways to effectively perform the exhaustive search for an emptying sequence of part moves,
2. Finding ways to reduce the search effort, and
3. Identifying sufficient conditions that can be examined to make correct decisions regarding the safety of part move requests.

1.8 Thesis Overview

In Chapter 2 a review of relevant literature is presented. The literature reviewed is categorized and discussed in four categories. Sources of literature on directed graphs, circuits and strongly connected components are outlined in the first three sections. This is followed by a review of deadlock avoidance methods employing directed graph based system representation. This review is organized in two sections. The first of these sections examines some relevant contributions in the field of conservative avoidance techniques. The latter section is devoted to an examination of some maximally permissive avoidance literature.

Chapter 3 presents the search methods used for identifying a feasible emptying sequence of part moves. The first method employs a depth first search, and the latter employs a hybrid search algorithm. Later in the chapter the significance of avoiding repetitive search is presented, followed by a method for preventing more than one examination of the same state while searching for an emptying sequence of part moves.

Chapter 4 presents a set of sufficiency conditions along with some techniques that help reduce the number of states searched during the evaluation of part moves. Details of these conditions and their implementation in the proposed DAA are presented.

Chapter 5 discusses the implementation of the proposed DAA using discrete event simulation. The different options presented in Chapters 3 and 4 are evaluated using a series of experiments. These experiments and their results are presented.

Chapter 6 presents conclusions and suggestions for further work.

Chapter 2
LITERATURE REVIEW

2.1 Introduction

In this chapter a review of some of the deadlock avoidance literature relevant to this research is presented. This review focuses on avoidance research which uses directed graphs for FMS representation. Fanti et al [2004] present a comprehensive review of the research work in the field of deadlock resolution as applied to automated manufacturing systems. Their paper surveys three different modeling methods commonly employed to describe the FMS and the interaction between jobs. The three modeling methods surveyed are:

1. Directed Graphs
2. Automata
3. Petri Nets

Their paper presents a tutorial survey of models and solution strategies that reflect the evolution of deadlock control methods over the ten year period preceding their

publication. A description of each of the three resolution methods, as listed in Section 1.5, is presented along with examples and literature review.

FMS deadlock problems have occurred in different configurations of FMS systems. Consequently, research has been conducted and solutions have been presented to control deadlocks in different configurations of FMSs. By viewing an FMS as a resource allocation system (RAS), their paper classifies the research into the following categories:

1. Single Unit RAS: Systems in which jobs require a single unit of a single resource for each of their requests. This is often abbreviated as SU-RAS.
2. Single Type RAS: Systems in which jobs require several units of a single resource for each of their requests.
3. Conjunctive RAS: Systems in which jobs require an arbitrary number of units of each resource from a set of resources for each of their process stages.
4. Conjunctive/Disjunctive RAS: Systems in which every process stage of a job poses a finite number of alternative Conjunctive type resource requests.

The remainder of this chapter is comprised of two foci. The first, consisting of Sections 2.2 and 2.3, provides references and reviews of literature that provide a background on the system representation methodology used in the subsequent modeling. The second, consisting of Sections 2.4 and 2.5, provides a review of deadlock

avoidance research. Section 2.4 presents outlines of four papers that develop conservative avoidance methods. Section 2.5 presents outlines of three maximally permissive deadlock avoidance methods.

2.2 Directed Graphs

Robinson et al [1980] and Bang-Jensen et al [2001] provide comprehensive descriptions of the basic directed graph terminology. Table 2.1 lists definitions of some basic directed graph terminology that is used in subsequent chapters. Results on directed graphs, along with proofs and examples, can be found in these two sources.

Table 2.1 Basic Directed Graph Definitions

Path: A path P in a directed graph D = (V, A) is a finite sequence consisting of vertices and arcs of D alternately, such that P begins and ends with vertices. Thus P = u_1, (u_1, u_2), u_2, (u_2, u_3), ..., (u_{n-1}, u_n), u_n, where u_i ∈ V for 1 ≤ i ≤ n and (u_i, u_{i+1}) ∈ A.

Simple Path: A path in which all vertices are distinct is called a simple path.

Reachable: If u and v are vertices in a directed graph D and there exists at least one path with its first vertex u and last vertex v, then v is said to be reachable from u.

Sub-digraph: If D = (V, A) is a directed graph and U is a non-empty subset of V, then the directed graph (U, A ∩ (U×U)), whose set of vertices is U and whose arcs are those arcs of D which both begin and end in U, is termed the sub-digraph of U.

Partial-digraph: If D = (V, A) is a directed graph and B is some subset of A, then the directed graph (V, B) is termed a partial digraph of D.

Equivalence: If u and v are vertices in the directed graph D and each is reachable from the other in D, then u is said to be equivalent to v.

Strongly Connected Components: If u is a vertex in a directed graph D, then the set of vertices equivalent to u is called the strongly connected component (SCC) of u and is symbolized by c(u). Since components are equivalence classes, the SCCs defined by two vertices are either the same set or they have no vertex in common.

One of the first papers utilizing directed graph system representation to address deadlocks in FMS was presented by Wysk et al [1991]. They addressed the problem by proposing a deadlock detection solution. The publication was seminal in that it identified deadlocks in FMS as a significant problem. It drew parallels and differences between existing deadlock research in computer operating systems, and it also laid down necessary and sufficient conditions for deadlocks in FMSs. Their solution employed

directed graph and string manipulation techniques. The directed graphs were used to describe wait relationships between parts. A step-by-step procedure, used to generate the directed graph system representation, was presented along with examples.

Kumaran et al [1994] presented a graph theoretic deadlock detection and avoidance procedure. They defined and proposed the use of higher order graphs as a means of look ahead search for identifying impending deadlocks. Their research built upon some of the work of Cho et al [1995], which was significant owing to their conceptualization of bounded graphs. A bounded graph is a special directed graph representation which is constructed by traversing the routing of a part from its current location until possibly its completion. Traversal is continued and directed arcs are drawn between subsequent destination machines until a non-empty vertex, representing a machine occupied by a part, is encountered. If the part encountered is other than the current part, the routing of the encountered part is traversed in a similar manner until either a previously traversed part is encountered or the part routing is exhausted.

Different research works have used different directed graphs to represent FMS system states. Fanti et al [1997] used two kinds of directed graphs for system representation. These are described in the review of their paper in Section 2.4. This research employs two different directed graph representations to model parts interaction in two different contexts. Details of these directed graph representations are presented in Sections 3.6 and 4.2.
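One reading of the bounded-graph construction of Cho et al described above is sketched below in Python. This is an illustration under stated assumptions, not the authors' implementation: routings have no choices, current_step gives each part's 1-based processing stage, and occupant gives the part (if any) at each machine; all names and data are hypothetical.

    def bounded_graph(start_part, routings, current_step, occupant):
        """Sketch of a bounded graph rooted at start_part (single-unit RAS, no routing choices)."""
        arcs, traversed, frontier = set(), set(), [start_part]
        while frontier:
            part = frontier.pop()
            if part in traversed:
                continue                                  # previously traversed part: stop this branch
            traversed.add(part)
            route = routings[part]
            prev = route[current_step[part] - 1]          # the machine the part currently occupies
            for nxt in route[current_step[part]:]:        # walk the remaining destinations in order
                arcs.add((prev, nxt))
                holder = occupant.get(nxt)
                if holder is not None and holder != part:
                    frontier.append(holder)               # continue the traversal from the blocking part
                    break                                 # ...and stop at this first occupied machine
                prev = nxt
        return arcs

    # Hypothetical two-part example: A at M1 heading for M2 then M3; B occupies M2 and heads for M1.
    routings = {"A": ["M1", "M2", "M3"], "B": ["M2", "M1"]}
    current_step = {"A": 1, "B": 1}
    occupant = {"M1": "A", "M2": "B", "M3": None}
    print(sorted(bounded_graph("A", routings, current_step, occupant)))   # [('M1', 'M2'), ('M2', 'M1')]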

2.3 Circuits and Strongly Connected Components

Mateti et al [1976] provide a listing of twenty-one algorithms used for enumerating all circuits of a given directed graph. They classify the algorithms into four categories depending on their underlying approach to computing the circuits:

1. Circuit Vector Space for Undirected Graphs
2. Search Algorithms
3. Powers of the adjacency matrix
4. Edge-Digraph

They provide a scheme for comparing the space and time bounds of these algorithms. They list a set of initial graph simplification techniques that can be used to reduce the size of the directed graph provided as input. Of the algorithm types analyzed, the search algorithms employing a backtracking approach were found to be the fastest. An adapted depth first search based circuit detection algorithm has been developed in this research. Details of this algorithm are presented in Section 3.6.

Strongly Connected Components (SCCs), formally defined in Table 2.1, are maximal sets of vertices that are mutually reachable from one another. SCCs partition the directed graph and lend themselves as a natural means of problem reduction.
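For reference, a linear-time SCC computation in the style of Kosaraju's algorithm, which is cited below and adapted in Section 4.3 of this thesis for problem reduction, can be sketched as follows. This is a generic Python illustration, not the thesis's implementation.

    def strongly_connected_components(vertices, arcs):
        """Kosaraju-style SCCs: record DFS finish order on D, then sweep the reversed graph."""
        adj = {v: [] for v in vertices}
        radj = {v: [] for v in vertices}
        for u, v in arcs:
            adj[u].append(v)
            radj[v].append(u)

        visited, finish_order = set(), []
        def dfs_finish(root):                         # iterative DFS recording finish times
            stack = [(root, iter(adj[root]))]
            visited.add(root)
            while stack:
                node, it = stack[-1]
                child = next(it, None)
                if child is None:
                    finish_order.append(node)
                    stack.pop()
                elif child not in visited:
                    visited.add(child)
                    stack.append((child, iter(adj[child])))

        for v in vertices:
            if v not in visited:
                dfs_finish(v)

        assigned, components = set(), []
        for v in reversed(finish_order):              # decreasing finish time, on the reversed graph
            if v in assigned:
                continue
            component, stack = [], [v]
            assigned.add(v)
            while stack:
                node = stack.pop()
                component.append(node)
                for w in radj[node]:
                    if w not in assigned:
                        assigned.add(w)
                        stack.append(w)
            components.append(sorted(component))
        return components

    print(strongly_connected_components(["M1", "M2", "M3"], [("M1", "M2"), ("M2", "M1"), ("M2", "M3")]))
    # [['M1', 'M2'], ['M3']]  -- M1 and M2 are mutually reachable; M3 is a component by itself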

If D_1, D_2, ..., D_t are the strong components of the directed graph D = (V, A), and v(D_i) is the set of vertices comprising D_i, then v(D_1) ∪ v(D_2) ∪ ... ∪ v(D_t) = V, the set of vertices of D, and v(D_i) ∩ v(D_j) = ∅ for all i ≠ j.

Cormen et al [2001] provide descriptions, pseudo-codes and examples of many directed graph algorithms. They cite the following three linear time algorithms that compute the SCCs of directed graphs:

1. Kosaraju's Algorithm,
2. Tarjan's Algorithm, and
3. Gabow's Algorithm

This research used an adaptation of Kosaraju's algorithm found in Cormen et al [2001]. A detailed outline of this algorithm is presented in Section 4.3. This algorithm runs in linear time (linear in the number of vertices and arcs of the directed graph input).

2.4 Conservative Deadlock Avoidance

Fanti et al [1997] use directed graph concepts to derive necessary and sufficient conditions for deadlock occurrence. Two distinct types of directed graph representations

are employed. It is assumed that a pre-defined set of part types is produced in the FMS. The routing of each part type is referred to as its working procedure. The first type of directed graph is called the working procedure digraph, which defines the whole sequence of resources that each job needs to acquire. Subsequent computations based on the working procedure digraph are performed offline. The other type of directed graph is named the transition digraph, which represents the current system state and future resource requirements. Using these two directed graph representations the authors formally define the concepts of a second order directed graph and a second order deadlock.

A noteworthy contribution of this paper is the formal definition of Restricted Deadlocks. A restricted deadlock occurs when the deadlock avoidance policy incorrectly designates a set of safe part moves to be unsafe and creates a situation similar to a deadlock.

The authors use their results to develop five deadlock restriction policies. Each policy is presented as a combination of two control laws, which are symbolized by the ordered pair (f_1, f_2). Control rule f_1 evaluates the safety of allowing the entry of new parts into the system. Control rule f_2 evaluates the safety of allowing part transitions between resources. Proofs substantiating the correctness of each of these restriction policies are presented along with discussion of their computational complexity and examples. These policies are

tested under four different working conditions using a simulation model. Throughput is used as the performance measure.

Two of the five restriction policies employed concepts of second level deadlocks. Experimental results, under all four testing conditions, revealed higher throughputs for the policies that restricted part movement based on the results of the evaluation of second level deadlocks.

Deadlock avoidance solutions have been proposed for different characteristics of systems and part flows. Zhang et al [2002] proposed a deadlock avoidance solution for systems which allow parts with choices in routings. They employed a classification scheme that identifies part move requests as safe, unsafe or undetermined. This classification scheme is linear in complexity. The classification method employs construction of a dynamic path (a type of directed path). When called from the main procedure, the dynamic path is constructed by traversing the routing of the part requesting the part move. The deadlock avoidance algorithm (DAA) terminates with a decision if the part request belongs to a class known to be either safe or unsafe.

Part move requests classified as undetermined are subjected to a polynomial complexity, conservative procedure named the empty_system algorithm. This algorithm is a virtual simulation; changes made during the execution of this algorithm are made in a data structure only and not applied to the system itself. The algorithm applies the classification procedure to each of the parts

that have (one of) their next resource(s) available. If the advancement of one of these parts can be classified as safe, then the part is advanced virtually.

The empty_system algorithm re-evaluates the set of parts with (one of) their next resource(s) available. All advancements classified as safe are made virtually. If at any stage no safe classifications exist, the algorithm arbitrarily picks an undetermined part advancement and applies it virtually. This process is continued until either it empties all parts from the system, in which case the original part move request is granted, or all parts cannot be emptied in this manner, in which case the original part move request is denied. The empty_system algorithm does not backtrack, and so the state space is not searched exhaustively. The complexity of this algorithm is polynomial in system size. It is, however, not clear how this algorithm would avoid Restricted Deadlocks.

Their publication presents research results along with comparisons with other DAAs. The primary performance measure used for presenting results was the optimality factor, that is, the ratio of the number of safe moves allowed by the DAA to the actual number of safe moves. This ratio is a measure of the permissiveness of the adopted approach. A maximally permissive approach would have an optimality factor of 1.

Lawley et al [1998] presented design requirements and guidelines for new research in FMS deadlock avoidance. These are listed in Table 2.2.

Table 2.2 Requirements and Performance Indicators for Deadlock Avoidance Solutions

Correctness: Provide a guarantee of deadlock free system operation
Scalability: Remain tractable as the system size increases
Operational Flexibility: Not be too restrictive in the allocation of resources
Configurability: Lend itself to various modes of system operation

They present a DAA named the Resource Order Policy (RO). RO uses an ordering of FMS machines to categorize parts as right, left or undirected. The RO policy uses this information to generate a set of linear constraints that inhibit unsafe moves. The policy is conservative, and some safe part moves may be inhibited by the constraint set. Different orderings of FMS machines produce different sets of constraints. It is shown that different constraint sets result in different amounts of permissiveness of the RO policy. The authors formulate an integer program that determines the optimal ordering of machines. Constraint sets generated from such an ordering seem to impose fewer restrictions on part moves than a naively chosen ordering.

Hosack et al [2003] evaluate and compare the performance of several deadlock avoidance policies in conjunction with scheduling rules, as well as gauging the impact of due date tightness on system performance. They employed a simulation study. The FMS modeled consisted of six machines of unit capacity tended by AGV material handlers.

Experiments were established using deadlock avoidance policy, scheduling rule and due date tightness as the factors of consideration. Table 2.3 lists the different levels of these factors.

Table 2.3 Factors and their Levels in Experiments Conducted by Hosack et al [2003]

Deadlock Avoidance Policy: Safe Capacity; Banker's Algorithm; Capacity Dictated by Work; Modified Safe Capacity
Scheduling Rule: Earliest Due Date; Critical Ratio; First-In First-Out; Shortest Processing Time (SPT); Truncated SPT
Due Date Tightness: Loose; Moderate; Tight

Experimental results were calculated for the following five performance measures:

1. average number of jobs in system
2. average throughput
3. average flow time
4. average tardiness
5. average percentage of jobs tardy

No single deadlock avoidance policy was a clear winner for all performance measures. However, experimental analysis revealed that the deadlock avoidance policy

factor had a much more significant impact on system performance than the scheduling rule factor.

2.5 Maximally Permissive Deadlock Avoidance

Yelcin et al [2004] present a state space search based, maximally permissive approach for avoiding deadlocks. A finite automaton is used for modeling the events in an FMS. This is done by developing separate automata for the plant and the parts routing. Part routings with choices can also be modeled with this approach. The two automata are synthesized to form a supervisor. The supervisor automaton guarantees deadlock free control by restricting the FMS controller to only accept part move requests that belong to the synthesized set of allowable state transitions.

Three DAAs are presented. All three algorithms employ a depth first state space search for an emptying sequence of moves over a directed graph. The basic algorithm starts by generating the reachable state space for the parts currently in the system. The algorithm uses depth first search to determine all deadlock free paths from the current state to the empty state. The search space is generated each time a new part enters the system. All requests of current parts are decided by looking up the reachability of the empty state from the requested state in the current state space.

The second algorithm searches for an emptying sequence of moves for each part move request. This algorithm does not compute all deadlock free paths; however, the state space is generated for each part move request. The third algorithm is an enhancement of the second in that it employs deadlock detection to further limit the number of states examined during the search process. Results are presented separately for systems with and without choices in part routings. The primary performance measures for which results are tabulated are the average and maximum number of states explored during the look ahead search. Results were presented for four, six, eight and ten machine systems.

Kumar et al [1998] identify a subclass of the Single-Unit RAS for which there exists a polynomial, maximally permissive deadlock avoidance procedure. The paper starts by establishing a method of representing the current parts allocation to resources using a vector. This vector represents the current state of the FMS. The authors build on this to define the following aggregate sets of system states:

S_0: the empty state; no parts in the FMS
S_r: the set of reachable states, defined as the set of states reachable from S_0
S_s: the set of safe states, defined as the set of states from which S_0 is reachable
S_u: the set of unsafe states, defined as the complement of S_s
S_d: the set of (part flow) deadlock states

The set S_u \ S_d represents the set of deadlock free unsafe states. All impending deadlocks belong to this set. The authors establish a subclass of Single-Unit RAS for which they prove S_u \ S_d = ∅. Thus, if the system is not already deadlocked it is in a safe state. This results in a single-step look ahead detection providing scalable and maximally permissive deadlock avoidance.

Lawley et al [2001] frame the safety question as: given a set of resources, a set of partially completed processes, and their corresponding resource sequences, is there a sequence of resource allocations that completes every process and deallocates all resources? Membership of the general safety question for Single-Unit RAS systems in the class of NP-Complete problems is established by providing a polynomial reduction from the 3-SAT problem. Since the 3-SAT problem is a known NP-Complete problem, such a polynomial reduction is sufficient to establish the other problem to be NP-Complete.

Deadlock avoidance research has identified some subclasses of the Single-Unit RAS which have been proven to be free of any reachable deadlock free unsafe states. Every possible allocation in these subclasses is either safe or deadlocked. Thus a polynomial, single-step look ahead for deadlock is the maximally permissive deadlock avoidance procedure for these subclasses.
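The distinction between deadlocked states and deadlock-free unsafe states (the set S_u \ S_d) can be made concrete with a brute-force check. The Python sketch below is illustrative only: it uses the single-unit, no-choice assumptions of this thesis and the two-part assignment of Figure 1.3, which has enabled moves yet no emptying sequence, i.e. an impending deadlock.

    from functools import lru_cache

    def enabled_moves(state):
        """state: frozenset of (part, remaining_route) pairs, remaining_route a tuple of machines
        whose first entry is the machine the part currently occupies (unit-capacity machines)."""
        occupied = {route[0] for _, route in state if route}
        moves = []
        for part, route in state:
            if len(route) == 1 or route[1] not in occupied:   # next machine free, or part exits
                moves.append(part)
        return moves

    def advance(state, part):
        new = set()
        for p, route in state:
            if p == part:
                if len(route) > 1:
                    new.add((p, route[1:]))                   # move to the next machine; drop if finished
            else:
                new.add((p, route))
        return frozenset(new)

    @lru_cache(maxsize=None)
    def is_safe(state):
        """True if some sequence of moves empties the system (exhaustive search with memoisation)."""
        if not state:
            return True
        return any(is_safe(advance(state, part)) for part in enabled_moves(state))

    def classify(state):
        if state and not enabled_moves(state):
            return "part flow deadlock (in S_d)"
        return "safe (in S_s)" if is_safe(state) else "deadlock-free unsafe (in S_u \\ S_d)"

    # Figure 1.3: P1 at M1 still needs M2 and M3; P2 at M3 still needs M2 and M1.
    state = frozenset({("P1", ("M1", "M2", "M3")), ("P2", ("M3", "M2", "M1"))})
    print(classify(state))   # deadlock-free unsafe (in S_u \ S_d)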

Lawley et al [2001] add to these results by presenting two new subclasses. Additionally, the structures which have been exploited by existing research to develop maximally permissive deadlock avoidance procedures of polynomial complexity are categorized. The following three categories are created:

1. Resource Capacity: This category accommodates the subclass of Single-Unit RAS in which every resource type has capacity greater than one.
2. Sequence Restrictions: Formation of circuits, which are necessary for deadlocks to occur, takes place due to interaction between process sequences. Policies belonging to this category impose sufficient limitations on the interaction between process sequences such that deadlock-free unsafe states do not arise.
3. Central Buffering: The policies classified into this category apply to FMSs which have central buffers that may be used to free up capacity at certain bottleneck machines.

2.6 Summary

This chapter provided a review of some literature that was relevant to this thesis. A summary description of some literature on directed graphs was presented along with a listing of some basic directed graph definitions. Some literature on circuits and strongly

connected components was reviewed. Relevant literature on conservative and maximally permissive deadlock avoidance was also reviewed.

Chapter 3
LOOK AHEAD SEARCH

3.1 Introduction

The look ahead deadlock avoidance solution proposed in this thesis is comprised of the following main components:

1. Pre-Solve Procedures
2. Complete Look Ahead Search

Figure 3.1 and the discussion that follows outline the proposed deadlock avoidance methodology.

Figure 3.1 Outline of Proposed Methodology

Pre-Solve procedures attempt to reduce the search space and try to make a decision on the part move request. If a decision on the safety of the part move request is made by the Pre-Solve procedures, then no further evaluation is required. Pre-Solve procedures are discussed in detail in Chapter 4. If the Pre-Solve procedures cannot make a decision on the safety of the part move request, then the complete look ahead search is invoked. The look ahead search searches for a reachable empty state.
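The control flow of Figure 3.1 can be sketched as follows. The function names and the trivial pre-solve rule used here are placeholders, not the procedures of Chapters 3 and 4; the point is only the three-way outcome (safe, unsafe, undetermined) followed by the full look ahead when needed.

    SAFE, UNSAFE, UNDETERMINED = "safe", "unsafe", "undetermined"

    def pre_solve(state):
        """Placeholder pre-solve: decide only a trivial case, otherwise defer to the full search.
        state: any collection of the parts still in the system."""
        if len(state) <= 1:
            return SAFE                       # at most one part can always empty the system
        return UNDETERMINED

    def evaluate_request(requested_state, look_ahead_search):
        """Mirror of Figure 3.1: pre-solve first, complete look ahead only if undetermined."""
        verdict = pre_solve(requested_state)
        if verdict != UNDETERMINED:
            return verdict == SAFE
        return look_ahead_search(requested_state)   # exhaustive search for a reachable empty state

    # Reusing an exhaustive emptying-sequence search, such as the is_safe() sketch given
    # after Section 2.5, as the look ahead:
    # accept = evaluate_request(candidate_state, is_safe)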

This chapter presents the look ahead search procedure used to perform the complete enumerative search. The look ahead algorithm generates and evaluates states corresponding to the reachable part assignments until either an empty state is found or all states have been examined.

The look ahead search methodology presented in this thesis is carried out using a tree search. The tree search systematically generates and evaluates part assignments reachable from the part assignment being evaluated. Each part assignment is referred to as a system state. The discussion in the remainder of this chapter requires a formal description of system state. Section 3.3 describes how system state is represented in this research. Section 3.4 describes the function used to generate states during the tree search, along with an example.

Each state that is generated during the course of the tree search is evaluated to determine if further states need to be generated from it. A state evaluation function is used to evaluate all generated states. Given a state, the state evaluation function evaluates it as safe, unsafe or undetermined. Two methods of state evaluation were developed and separately implemented. Section 3.6 presents an overview of these state evaluation functions.

Circuit detection forms one of the methods used in this research for state evaluation. A depth first search based circuit detection algorithm is employed to perform the circuit sufficiency state evaluation. Details of this procedure are presented in Section 3.7.

The tree search algorithms presented in this research proceed by systematically generating and evaluating states reachable from the state that is to be evaluated. The set of states generated at a point of the tree search constitutes the search state space. Section 3.8 presents some concepts and terminology relating to the generation and exploration of the state space. Two distinct tree search algorithms were developed for separate implementation. Section 3.9 describes a depth first search algorithm along with an example. Section 3.10 describes a hybrid search algorithm along with an example.

Section 3.11 illustrates an important aspect associated with the generation of the search space: a large proportion of the states were repeated along different branches of the look ahead search tree. The extent of the repetition and a mechanism to avoid revisits to previously visited states are presented in Section 3.11.

3.2 Assumptions

The proposed deadlock avoidance solution may be used to avoid deadlocks in a particular class of manufacturing systems. This section lists the set of assumptions made about the class of manufacturing systems considered.

1. All machines have unit capacities.
2. Machine breakdowns are not considered.
3. Part routings do not have choices.
4. Parts claim exclusive use of the machines they require.
5. A part holds the machine currently allocated to it while it waits for its next machine.
6. Pre-emption is not considered; thus a part cannot be forcibly removed from its currently allocated resource until it has received the requested service to completion.
7. Parts require a single machine for each of their requests. Such a class of systems has been referred to in the deadlock avoidance literature as a single unit resource allocation system, often abbreviated as SU-RAS.
8. A single direct-address material handler services all part movement requests accepted by the deadlock avoidance controller.
9. The material handler is equipped with an end-effector of unit capacity.
10. Storage buffers are not considered.

11. Parts requiring processing at this system arrive via the input conveyor. On completion of the required processing, parts leave via the output conveyor. The input and output conveyors can only hold parts arriving to and departing from the system, respectively, and may not be used as storage buffers.
12. After receiving service at its last requested machine a part is transferred to the output conveyor. It is assumed that the output conveyor does not fill up and block other parts that require transfer to the output conveyor.

3.3 System State Representation

This section establishes a nomenclature for representing system state and a description of some related terminology, along with an example.

Let W represent the set of all workstations or processing resources that comprise the FMS. At any time a given resource W(i) ∈ W may either be empty or assigned. If W(i) is assigned, it is assigned to exactly one part.

Let P represent the dynamic set of parts assigned to workstations in the FMS at some time or state. Each part P(j) ∈ P is assigned to exactly one workstation at any time.

Each part requires processing at a certain subset of workstations in a certain sequence. The sequence of processing requirements of P(j) is defined in the part routing, represented by the vector R(j). Further, R(j, k) holds the identifier of the resource required for processing the k-th step of part P(j).

A processing stage vector is used to track the current processing step of each part in P. The processing stage vector is represented by a. At any time (or state), a(j) represents the processing stage of the j-th part in P, P(j), at that time (or state).

Further, let L represent the allocation vector, which is an array with |W| elements whose i-th element L(i) holds the identifier of the part assigned to the i-th resource, W(i), at the given time (or state). If W(i) is empty, then the null part identifier, φ, is assigned to L(i). The allocation vector may be derived from the part set P, the parts routing R and the processing stage vector a using the pseudo-code listed in Figure 3.2:

L = φ    /* set all elements of L to φ */
index = 1
while index ≤ |P|
    current_workstn = R( index, a(index) )
    L(current_workstn) = P(index)
    index = index + 1
end while

Figure 3.2 Pseudo code for deriving the allocation vector

The state of the system at any time uniquely specifies the allocation of parts to workstations. The pair (L, a) is used in subsequent sections to represent the state of the system.

As an illustration of the nomenclature used for representing a state of the system, consider a four workstation SU-RAS in which P, the set of parts, is comprised of three parts, P = [A, B, C]. Workstations are identified with numbers: W = [1, 2, 3, 4]. The routings of these parts are listed in Table 3.1.

Table 3.1 Part Routings for System State Example (routings of parts A, B and C)

Assuming that all three parts are at their respective first processing stages, the processing stage vector is specified as a = [1, 1, 1]. This implies part A is allocated to workstation 1, part B is allocated to workstation 2 and part C is allocated to workstation 3. Further, the allocation vector L may be established using the procedure of Figure 3.2 as L = [A, B, C, φ], where the fourth element, the null part identifier, indicates that workstation 4 is empty. Figure 3.3 depicts a representation of this assignment for an example four workstation FMS.

The workstations in Figure 3.3, labeled 1, 2, 3 and 4, represent the set of workstations W = [1, 2, 3, 4]. Parts requiring processing at this cell arrive via the Input Conveyor. On completion of required processing, parts leave via the Output Conveyor. A direct address robot provides material handling within the cell. The input and output conveyors can only hold parts arriving to and departing from the cell, respectively, and may not be used as in-process buffers. It is assumed that the output conveyor does not fill up and block the cell. No buffer storage exists in this cell.

Figure 3.3 System State Example
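The (L, a) representation and the derivation of Figure 3.2 translate directly into code. The sketch below is illustrative: the routings are hypothetical (the entries of Table 3.1 are not reproduced above), chosen only so that the allocation matches the example, L = [A, B, C, φ], and so that only part B can advance, as in Section 3.4.

    PHI = None                                   # null part identifier (written φ in the text)

    def derive_allocation(W, P, R, a):
        """Figure 3.2: build the allocation vector L (workstation -> part or PHI)."""
        L = {i: PHI for i in W}
        for index, part in enumerate(P, start=1):
            current_workstn = R[part][a[index - 1] - 1]   # R(index, a(index)), 1-based as in the text
            L[current_workstn] = part
        return L

    W = [1, 2, 3, 4]
    P = ["A", "B", "C"]
    # Hypothetical routings: first workstations match the example above, and part B's
    # second workstation is 4 (used again in the Section 3.4 example).
    R = {"A": [1, 3], "B": [2, 4], "C": [3, 2]}
    a = [1, 1, 1]                                # every part at its first processing stage

    print(derive_allocation(W, P, R, a))         # {1: 'A', 2: 'B', 3: 'C', 4: None}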

3.4 State Generation

The tree search involves generation and evaluation of reachable states. This section provides a description of the function used to generate individual states during the tree search.

Let Γ(L, a, j) represent the state generation function, which is defined for all parts j whose next processing resource is available, i.e., L( R(j, a(j)+1) ) = φ, where 1 ≤ j ≤ |P|. Given a state (L, a) as input, the state generation function returns the state corresponding to the advancement of the j-th part in (L, a) to the next workstation in its routing. It may be noted that the set of all states immediately reachable from (L, a) may be represented by the following aggregate set:

∪ Γ(L, a, j), the union taken over all j such that 1 ≤ j ≤ |P| and L( R(j, a(j)+1) ) = φ

In the pseudo-code depicted in Figure 3.4, a call is made to the state generation function from some calling procedure. The current state information (L, a) is passed to the state generation function. The state generation function generates the requested state by making a copy of the allocation vector of the current state and applying the part

advancement to the copy of the allocation vector. The generated state is returned to the calling procedure, where it is stored in the L_CHILD allocation vector.

/* calling function */
a(j) = a(j) + 1
L_CHILD = Γ(L, a, j)

/* state generation function */
Γ(L, a, j)
    L_COPY = L
    prev_workstn = R( j, a(j)-1 )
    L_COPY(prev_workstn) = φ
    if a(j) ≤ |R(j)|
        current_workstn = R( j, a(j) )
        L_COPY(current_workstn) = P(j)
    end if
    return L_COPY
end function

Figure 3.4 Pseudo code for state generation

As an example, consider the three part, four workstation example depicted in Figure 3.3. Of the three parts, only part B can advance to its next workstation. Parts A and C are blocked under the given parts assignment. In order to generate the state corresponding to the advancement of part B to its next processing location, the state generation function is called. The function is called by passing the current state, ([A, B, C, φ], [1, 1, 1]), and j, the index of part B in the set P = [A, B, C], i.e. j = 2.

The state generation function copies the parent state, de-allocates the current processing location of part B, which is workstation 2, and allocates the subsequent processing location of part B, i.e. workstation 4. The function returns the allocation vector [A, φ, C, B] of the generated state to the calling procedure.

3.5 Free Path Reduction

Any part which has all of its subsequent destination resources empty is said to have a free path. At any state s_j, let P(s_j) be the set of parts in the system. If any part p_k ∈ P(s_j) has a free path, then free path reduction will remove p_k from P(s_j), and all parts in P(s_j) \ p_k will retain the same workstation assignments. Free path reduction is a procedure that identifies all parts with free paths and removes them from any further consideration during subsequent evaluation of the state currently being examined.

The free path reduction routine is called each time a new node is generated. The routine removes all parts having free paths. Since it is possible that removal of a part might create additional free paths, this reduction routine is called repeatedly until none of the parts that remain have free paths.
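A minimal sketch of this repeated removal is given below, assuming the representation of Section 3.3 (routings without choices, unit-capacity workstations). The function and variable names, and the example data, are illustrative rather than the thesis's implementation.

    def free_path_reduction(parts, routings, current_step, occupant):
        """Repeatedly drop parts whose remaining destinations are all empty.

        parts: part ids; routings: part -> routing (list of workstations);
        current_step: part -> 1-based current step; occupant: workstation -> part or None.
        Returns the reduced part set; occupant is updated to reflect the removals.
        """
        remaining = set(parts)
        removed_one = True
        while removed_one:                        # a removal can free up further paths
            removed_one = False
            for part in list(remaining):
                route = routings[part]
                pending = route[current_step[part]:]          # workstations still to be visited
                if all(occupant.get(w) is None for w in pending):
                    remaining.discard(part)                   # part can run to completion unhindered
                    occupant[route[current_step[part] - 1]] = None
                    removed_one = True
        return remaining

    # Example: C can run to completion (workstation 4 is empty), so it is removed; that frees
    # workstation 3, which then gives A a free path as well. B and D block each other and remain.
    routings = {"A": [5, 3], "B": [2, 1], "C": [3, 4], "D": [1, 2]}
    current_step = {"A": 1, "B": 1, "C": 1, "D": 1}
    occupant = {1: "D", 2: "B", 3: "C", 4: None, 5: "A"}
    print(sorted(free_path_reduction(["A", "B", "C", "D"], routings, current_step, occupant)))  # ['B', 'D']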

Free path reduction checks for parts with free paths and removes them from consideration in the further search for an emptying sequence, thereby reducing the search effort. It is important to highlight that free path reduction only removes parts from consideration during the evaluation of the current state and the subsequent descendant states, and that these parts are restored if the search has to backtrack to a state which is an ancestor of the current state.

3.6 State Evaluation

Every state generated by the state generation function is evaluated with a state evaluation function. The state evaluation function accepts the state if it satisfies some evaluation criterion and rejects it otherwise. This section outlines two state evaluation criteria.

Let E(L, a) represent the state evaluation function. E(L, a) determines if the input state (L, a) is to be considered for further state exploration. The state evaluation function receives any state which requires evaluation. It returns ACCEPT if the state being evaluated, (L, a), is accepted by the evaluation criterion and DONOT ACCEPT otherwise.

Two different evaluation criteria were developed. One of the criteria accepts all states in which at least one part can be advanced to its next processing location. This criterion for state evaluation is referred to as the FLUSH method. The other criterion attempts to determine if the state being evaluated is an unsafe state. It determines a state to be unsafe if it has a fully populated circuit, thus indicating a part flow deadlock. This criterion for state evaluation is referred to as the Circuit Sufficiency method.

(a) FLUSH Method

This method accepts any state in which at least one of the parts can advance to its next processing location. FLUSH can be used to evaluate states in either of the two search algorithms presented in Section 3.9 (DFS) and Section 3.10 (HS). During a call to evaluate some state (L, a), say, using the FLUSH method, each part is examined to see if its next processing location is available in (L, a). If the next processing location is available for any part, the state evaluation function accepts (L, a) and indicates this by returning ACCEPT. Conversely, if no part in (L, a) can be advanced, the state evaluation function does not accept (L, a) and indicates this by returning DONOT ACCEPT. Using the notation from Section 3.3, the state evaluation function incorporating the FLUSH method may be represented with the pseudo-code listed in Figure 3.5.

/* state evaluation function */
E(L, a)
    j = 1
    while j ≤ |P|
        next_workstn = R(j, a(j)+1)
        if L(next_workstn) = φ then return ACCEPT
        j = j + 1
    end while
    return DONOT ACCEPT
end function

Figure 3.5 Pseudo code for state evaluation

(b) Circuit Sufficiency Method

This method is used to examine states for part flow deadlocks. A state is not accepted if it is found to have a part flow deadlock. All states which do not have any part flow deadlocks are accepted. The circuit sufficiency criterion can be used to evaluate states in either of the two search algorithms (DFS or HS) presented in Sections 3.9 and 3.10.

During a call to evaluate some state (L, a) with the circuit sufficiency method, a directed graph corresponding to the parts assignment in (L, a) is prepared. A depth first search detection procedure is called to systematically traverse the arcs and vertices of the directed graph. Detection of a circuit indicates a part flow deadlock and forms the basis for rejecting (L, a). Conversely, the absence of circuits indicates the absence of part flow deadlocks and forms the basis for accepting (L, a). A discussion of the details of the circuit detection procedure is presented in Section 3.7.

(b) Circuit Sufficiency Method

This method is used to examine states for part flow deadlocks. A state is not accepted if it is found to have a part flow deadlock; all states that do not have any part flow deadlocks are accepted. The circuit sufficiency criterion can be used to evaluate states in either of the two search algorithms (DFS or HS) presented in Sections 3.9 and 3.10. During a call to evaluate some state (L, a) with the circuit sufficiency method, a directed graph corresponding to the parts assignment in (L, a) is prepared. A depth first search detection procedure is called to systematically traverse the arcs and vertices of the directed graph. Detection of a circuit indicates a part flow deadlock and forms the basis for rejecting (L, a). Conversely, absence of circuits indicates absence of part flow deadlocks and forms the basis for accepting (L, a). A discussion of the details of the circuit detection procedure is presented in Section 3.7.

3.7 Circuit Detection

An introduction to the circuit sufficiency method of evaluating states was presented in Section 3.6. This method of state evaluation employs a circuit detection algorithm to search for circuits in a directed graph. This section describes the circuit detection algorithm in detail. The circuit detection algorithm employs a depth first search over a directed graph representation of the state being evaluated. Section 3.7.1 presents the steps involved in the preparation of the directed graph, which is used as an input to the depth first circuit detection algorithm. Section 3.7.2 describes the depth first search procedure along with its pseudo code. Section 3.7.3 presents an example that walks through the circuit detection algorithm.

3.7.1 Directed Graph Preparation

A directed graph representation of some state under evaluation, s (say), is prepared by representing the workstations of the SU-RAS as vertices of the graph and adding one directed arc for each of the parts allocated in state s. The directed arc for each part is drawn from the vertex corresponding to its processing location in state s to the vertex corresponding to its immediate next processing location.

Exactly one arc is contributed by each part; therefore any circuit that is detected implies a fully populated circuit. A fully populated circuit is one in which each vertex of the circuit is occupied by a part. The directed graph used for finding circuits is defined and constructed as D_C = (V, A_C), where

1. V is the finite set of workstations
2. A_C is the finite set of directed arcs that is the union of the arcs representing the immediate next part transition of each of the parts.

A flow chart of the procedure for preparing the directed graph is presented in Figure 3.6.

Figure 3.6 Directed Graph Preparation for Circuit Detection
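A minimal Python sketch of this construction is given below, reusing the illustrative representation from the earlier sketches. Since routings have no choices and each part contributes exactly one arc, A_C can be held as a successor map with at most one outgoing arc per vertex.

    def build_circuit_graph(L, a, routes):
        # D_C = (V, A_C): vertices are workstation indices; one arc per allocated
        # part, from its current workstation to its immediate next processing location.
        next_vertex = {}                        # successor map: at most one outgoing arc per vertex
        for w, part in enumerate(L):
            if part is not None and a[part] < len(routes[part]):
                next_vertex[w] = routes[part][a[part]]   # immediate next processing location
        return next_vertex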

3.7.2 Circuit Detection Algorithm

This method of circuit detection is applicable to SU-RAS systems with part routings without choices. This implies that at most one part may occupy a workstation in a given state. Further, the absence of choices in part routings implies the existence of exactly one outgoing arc for each vertex corresponding to a workstation occupied by a part. Since every vertex can have at most one outgoing arc, no vertex may have more than one adjacent vertex.

The circuit detection algorithm establishes a directed graph representation corresponding to the state being evaluated, using the procedure for constructing the graph described in Section 3.7.1. It then proceeds to search for circuits in the directed graph. The search for circuits is carried out using a depth first search methodology for traversing paths in the directed graph.

The depth first search algorithm for detecting circuits consists of a calling function and a recursive function. The calling function iterates through the set of vertices of the directed graph with the purpose of selecting the lowest label undiscovered vertex. The selected vertex is submitted to the recursive function as the root of a new traversal. The recursive function accepts the vertex passed to it and checks whether it has an outgoing arc. If an outgoing arc exists, it checks the vertex on which the arc is incident. If the vertex on which the arc is incident has not previously been encountered, it is selected for examination and passed as an argument to the recursive function.

The recursive function continues the selection and examination of vertices that have not previously been encountered. If some vertex is found to have no outgoing arcs, the depth first search steps out of recursion and returns to the calling function. The recursive function thus traverses a path (Table 2.1). Path traversal is started from the vertex selected by the calling function and is continued until either a circuit is found or until it is established that this path is a simple path (Table 2.1) and is therefore free of circuits. A circuit is detected when, during the traversal of a path, a vertex is encountered that has previously been encountered on the same path. Upon detection of a circuit, the recursive function steps out of recursion and returns to the calling function. The return value indicates detection of a circuit. The calling function abandons further search and returns the status of the state being evaluated as unsafe. If a path traversal completes without detecting a circuit, the recursive function returns to the calling function. The calling function then seeks to begin a fresh traversal from the lowest label undiscovered vertex.

The logical schema of the main calling procedure of the circuit detection search is presented in Figure 3.7.

Figure 3.7 Flowchart of the Calling Function

A path is traversed until either a circuit is found or until it is established that this path is free of circuits. A path is determined to be free of circuits either if the traversal of the path reaches a vertex which has no outgoing arcs or if a vertex is encountered which has been encountered during a previous traversal. The case in which a path is determined to be acyclic upon encountering a vertex that has been encountered during a previous traversal requires discussion. Continuation of the circuit detection procedure, without termination, after one or more paths have been completely traversed implies acyclicity of all previously traversed paths. Thus any path leading into a known acyclic path must also be acyclic, and subsequent exploration along the currently traversed path would be redundant. The logical schema of the recursive function of the depth first circuit detection procedure is presented in Figure 3.8. A formal discussion and proof of this case is presented as follows.

Figure 3.8 Flowchart of the Recursive Function (DFS_Circuits)
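Taken together, the calling function (Figure 3.7) and the recursive function (Figure 3.8) can be sketched in Python as follows. The sketch operates on the successor map produced by the build_circuit_graph sketch above, and the root_info bookkeeping mirrors the Root_Info vector used in the example of Section 3.7.3; the names and representation are illustrative.

    def has_circuit(next_vertex, n):
        # Depth first circuit detection over D_C (out-degree at most one per vertex).
        # next_vertex -- successor map from build_circuit_graph
        # n           -- number of workstations (vertices are 0 .. n-1)
        root_info = [None] * n                  # root of the traversal that discovered each vertex

        def dfs(v, root):
            root_info[v] = root
            w = next_vertex.get(v)              # the single outgoing arc, if any
            if w is None:
                return False                    # dead end: the current path is acyclic
            if root_info[w] is None:
                return dfs(w, root)             # continue the current traversal
            if root_info[w] == root:
                return True                     # revisited a vertex on the same path: circuit
            return False                        # joins a previously traversed acyclic path

        for v in range(n):                      # fresh traversal from the lowest label
            if root_info[v] is None:            # undiscovered vertex
                if dfs(v, v):
                    return True
        return False

A circuit sufficiency evaluation function would then simply accept a state whenever has_circuit(build_circuit_graph(L, a, routes), len(L)) returns False.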

The search proceeds by traversing different paths in the directed graph, D_C. Each path is discovered as one depth first forest. Consider that, at some stage of the search, the traversal of a path leads to the examination of a vertex v_j that has not been encountered previously during the traversal of the current path; instead, v_j was discovered during the traversal of a path that has been completely explored previously. Let the currently explored path be denoted by P(v_1 ... v_j), such that path traversal begins with vertex v_1 and v_j is the most recently discovered vertex in the traversal of the current path. Let the previously traversed simple path containing v_j be denoted by P(u_1 ... u_n), which is composed of n vertices, beginning with vertex u_1 and terminating in u_n. P(u_1 ... u_n) may be represented as:

P(u_1 ... u_n) = u_1, (u_1, u_2), u_2, ..., u_i, (u_i, u_i+1), ..., u_n-1, (u_n-1, u_n), u_n, where u_i = v_j.

Further, let P(u_1 ... u_n) be comprised of two sub-paths, P(u_1 ... u_i) and P(u_i+1 ... u_n), such that the former is joined with the latter, in P(u_1 ... u_n), by the directed arc (u_i, u_i+1).

Since the out-degree of every vertex in D_C can be at most one, paths in D_C cannot diverge, i.e. no choices exist in the traversal of paths. This establishes that if two paths intersect at any vertex, the paths converge into one beyond the vertex of intersection. This result is used to establish that the currently traversed path, upon completion of traversal, may be considered to be composed of two sub-paths, P(v_1 ... v_j) and P(u_i+1 ... u_n), in that order and connected by the directed arc (v_j, u_i+1). The composite path is referred to as P(v_1 ... u_n).

PROPOSITION: If during the course of the traversal of a path a vertex is encountered that has been encountered during the traversal of a previously explored path, it is safe to consider the currently traversed path to be acyclic. Thus any path leading into an acyclic path will also be acyclic.

PROOF: Acyclicity of P(v_1 ... u_n) is established by establishing the acyclicity of the two sub-paths comprising P(v_1 ... u_n) and by establishing that the two sub-paths can be joined together without resulting in a circuit. Thus acyclicity of P(v_1 ... u_n) is established through the following three requirements:

1. Establishing acyclicity of P(v_1 ... v_j)
2. Establishing acyclicity of P(u_i+1 ... u_n), and
3. Establishing P(v_1 ... v_j) and P(u_i+1 ... u_n) as disjoint (in the sense of not having any vertex in common), which establishes that joining the two sub-paths cannot result in a circuit.

The discussion that follows provides proofs that each of the three requirements for the acyclicity of P(v_1 ... u_n) holds.

1. Acyclicity of P(v_1 ... v_j): The path P(v_1 ... v_j) is given to be acyclic.

2. Acyclicity of P(u_i+1 ... u_n): The depth first search circuit detection procedure terminates upon detecting a circuit during the traversal of any path and does not examine any further vertices or paths once a circuit has been detected. Continuation of the circuit detection procedure after the traversal of P(u_1 ... u_n) therefore establishes P(u_1 ... u_n) to be acyclic. Further, since P(u_1 ... u_n) is acyclic, all of its sub-paths must also be acyclic. This establishes the acyclicity of P(u_i+1 ... u_n).

3. Establishing P(v_1 ... v_j) and P(u_i+1 ... u_n) as disjoint: All vertices in P(v_1 ... v_j), with the exception of v_j, have been discovered during the traversal of the current path. All vertices in P(u_i+1 ... u_n) have been discovered during a previous path traversal, and v_j = u_i is not in P(u_i+1 ... u_n) (by definition). This leads to the conclusion that these paths are mutually disjoint.

Since all three requirements are met, it can be concluded that it is safe to consider any path leading into an acyclic path to be acyclic. As mentioned earlier, the main procedure seeks to begin fresh traversals from the lowest label undiscovered vertex. In traversing a path rooted in vertex v_i, all outgoing arcs from vertices on this path to vertices with labels lower than i are ignored. This is consistent with the proof presented earlier in this section and prevents redundant exploration of vertices.

3.7.3 Circuit Detection Example

This section presents an example to illustrate the functioning of the circuit detection algorithm. A ten workstation SU-RAS system is considered. The workstations are represented by W = [v_1, v_2, v_3, ..., v_10]. Parts belonging to the set P = [A, B, C, ..., G] are currently assigned as per the allocation vector L = [C, E, B, ∅, ∅, A, ∅, D, F, G]. Table 3.2 lists the immediate next processing location for each part in P.

Table 3.2 Immediate Next Processing Locations of Parts

Part:           A     B     C     D     E     F     G
Next location:  v_9   v_5   v_10  v_9   v_10  v_8   v_3

The directed graph D_C is constructed using the procedure described in Section 3.7.1. Figure 3.9 represents the directed graph created for this example.

Figure 3.9 Directed Graph of Circuit Detection Example

The directed graph is used as an input by the circuit detection procedure. The procedure is comprised of two functions: the calling function and the depth first search function. Each vertex of the directed graph discovered during the course of the circuit detection procedure is associated with a root vertex, namely the root vertex of the traversal during which it was first discovered. Every root vertex has itself as its root vertex. This information is maintained in a vector named Root_Info, of size n, where n is the number of workstations in the system.

Upon initialization, the calling procedure resets the Root_Info vector to null. It proceeds by searching for the lowest label vertex with a null root. Since at this stage all vertices have null as their root and vertex v_1 has the lowest label, vertex v_1 is passed to the recursive depth first search function to initiate a new traversal.

The depth first search function checks for an outgoing arc from vertex v_1. It identifies an outgoing arc from v_1 to v_10. Vertex v_10 currently has null as its root and therefore has not previously been discovered. A recursive call is made to the depth first search function and v_10 is passed along with v_1, which forms the root of the current traversal. The Root_Info entry for v_10 is set to v_1, indicating that it was discovered during the traversal rooted at v_1. Thus v_10, which was previously undiscovered, is added to the current traversal. In a similar fashion v_3 and v_5 are subsequently added to the current traversal. However, v_5 does not have any outgoing arcs and the search steps out of recursion until it returns to the calling procedure. At this stage, Root_Info = [v_1, -, v_1, -, v_1, -, -, -, -, v_1], indicating that vertices v_1, v_3, v_5 and v_10 have been discovered during the traversal rooted at v_1.

The calling procedure continues with the selection of the lowest label undiscovered vertex. At this stage v_2 meets the criterion and is passed to the depth first search function. The depth first search function examines v_2 and identifies an outgoing arc to v_10. Since v_10 was discovered during a previous traversal, which did not contain a circuit, no circuit can exist on this traversal. The search steps out of recursion and returns to the calling function. The next vertex to be selected is v_4. It is passed to the depth first search function. However, v_4 is found to have no outgoing arcs and the search returns to the calling function.

The next vertex to be selected is v_6. It is passed to the depth first search function. Vertex v_6 is found to have an outgoing arc to v_9, which is undiscovered. A recursive call to the depth first search function is made and v_9 is passed along with v_6, the root of the current traversal. Vertex v_9 has an outgoing arc to the undiscovered v_8. A recursive call is made to the depth first search function and v_8 is passed along with v_6, the root of the current traversal. On examination, v_8 is found to have an outgoing arc to v_9, which has previously been discovered during the same traversal. This indicates the presence of a circuit; the search steps out of recursion to the calling function and returns a Circuit Detected status. At this stage Root_Info = [v_1, v_2, v_1, v_4, v_1, v_6, -, v_6, v_6, v_1]. No further calls are made to the depth first search function. Upon detection of a circuit the calling function returns the status of the state being evaluated as Not Acceptable.
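Running the has_circuit sketch from the previous section on the arcs of Figure 3.9 reproduces this result; vertices are written 0-based (v_1 as 0, ..., v_10 as 9) and the arc set is read directly from the allocation vector and Table 3.2.

    # Arcs: C: v1->v10, E: v2->v10, B: v3->v5, A: v6->v9, D: v8->v9, F: v9->v8, G: v10->v3
    next_vertex = {0: 9, 1: 9, 2: 4, 5: 8, 7: 8, 8: 7, 9: 2}
    print(has_circuit(next_vertex, 10))    # True: the fully populated circuit v_8 -> v_9 -> v_8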

3.8 Generation and Exploration of State Space

Two tree search algorithms are presented later in this chapter. Both algorithms employ distinct schemes for systematically generating states reachable from the state being evaluated. This section presents an introduction to the process of generation and exploration of the state space comprised of all reachable states. Some terminology associated with the generation and exploration of the state space is presented. This terminology is used in Sections 3.9 and 3.10, which describe the Depth First Tree Search and the Hybrid Tree Search respectively.

The state in which all parts have successfully completed all processing steps is called the empty state. This state has the property L(i) = ∅, 1 ≤ i ≤ |W|. The empty state is represented by s_0. Using s to represent some arbitrary state and s_0 to represent the empty state, let S_R represent the set of all states reachable from s. Determination of the safety of a part move request amounts to the determination of the safety of the potential assignment of parts that may be realized if the requested part move were to be allowed. This assignment of parts is established as s, the state to be examined for safety. This research attempts to establish the safety of s by searching for the empty state, s_0, in the set of states reachable from s, S_R. The part move request is safe if s_0 is in S_R and unsafe otherwise.

S_R, the set of reachable states, may be represented as a tree. Table 3.3 lists some definitions of basic terminology relating to trees that is used in the subsequent discussion.

Table 3.3 Basic Definitions relating to Trees

Tree: a connected, acyclic, directed graph consisting of nodes and arcs
Root Node: a node of the tree which has no arcs directed into it
Rooted Tree: a tree which has a root node
Ancestor: any node y on the unique path from the root node r to another node x is called an ancestor of x
Descendant: if y is an ancestor of x, then x is a descendant of y
Sub-Tree: a sub-tree of a tree T is a tree comprised of a node in T and all of its descendants in T
Parent: if the last edge on the path from the root r of a tree T to a node x is (y, x), then y is the parent of x. Each node of a tree can have at most one parent. Root r is the only node in T with no parent
Child: if y is the parent of x, then x is a child of y
Siblings: if two nodes have the same parent, they are siblings
Leaf: a node with no children is called a leaf
Degree: the number of children of a node x in the rooted tree T is called the degree of x
Depth: the length of the path from root r to a node x is the depth of x in T
Height: the largest depth of any node in T is the height of T

The reachable state space, S_R, of the state being evaluated, s, is a tree T such that:

1. Each node of T represents some state in S_R
2. s forms the root node of T
3. Node v is said to be a child of node u if v is immediately reachable from u (i.e. v ∈ Γ(u)) and v is accepted by the state evaluation function E(L, a)
4. Leaves of T are either dead-end states or the empty state. A dead-end state, s_d (say), is one for which either there are no immediately reachable states, i.e. Γ(s_d) = ∅, or all immediately reachable states fail to satisfy the evaluation criterion, i.e. no v ∈ Γ(s_d) is accepted by the state evaluation function E(L, a)

The tree T, corresponding to S_R, thus forms the search space and the current state represents the root of T. Two different tree search algorithms have been suggested for searching for an empty state in S_R. The suggested algorithms can both be categorized as generate and test procedures. Choppin [2004] defines a generate and test procedure, for searching trees, as one which generates each node in the search space and tests it to see if it is a goal node. Once the goal node is located, the search need not carry on further generation and testing of nodes.

Nodes are generated and evaluated during the course of the tree search. Detection of an empty state during the tree search establishes the existence of an emptying sequence and thus establishes the safety of the requested part move. No further generation and evaluation is required if an empty state is found. Thus computing time is saved whenever the search succeeds before the entire tree has been constructed. The search procedure continues generating and evaluating states until either an empty state is found or until all states have been completely explored without finding an empty state.

Exploration of a state refers to the process of generation and evaluation of the states reachable from it. Each state may, at a certain stage of the search, be classified into one of the following categories:

Completely Explored States: a state is said to be completely explored if all states reachable from it have been generated and evaluated.
Partially Explored States: a state is said to be partially explored if some, but not all, states reachable from it have been generated and evaluated.
Unexplored States: a state is said to be unexplored if it has been generated but no states immediately reachable from it have yet been generated.

In this research the Flexible Manufacturing System is modeled using directed graphs, D = (V, A), where V represents the set of workstations and A represents some subset of pending part routings. The set of workstations, V, is assumed to be static and machine breakdowns are not considered. The set A, on the other hand, is dynamic and represents the future transitions of the parts currently in the system. Different components of the deadlock avoidance solution presented here employ different methodologies for constructing A, the set of edges.

3.9 Depth First Search

This research presents and compares two tree search algorithms for systematically generating and evaluating the set of reachable states. This section describes the Depth First Search (DFS) algorithm. The section is organized into three sub-sections. Section 3.9.1 describes the working of the Depth First Search algorithm. Section 3.9.2 presents an example application of the algorithm to determine the safety of a state. Section 3.9.3 presents a discussion on the ordering of parts in the root state as an initialization step of the DFS algorithm.

3.9.1 Depth First Search Algorithm

The Depth First Search (DFS) algorithm tries to recursively generate and evaluate as many reachable states as possible before it returns from a recursive call. Recursion stops if either all of the generated states have been found to be unacceptable by the state evaluation function or if no subsequent reachable states exist. At this point DFS steps out of the most recent recursive call to explore other reachable states. The DFS completes the exploration of states in the inverse order of their generation. Discovery of an empty state forms a halting criterion, and the algorithm terminates any time an empty state is found during the search. A logical schema of the DFS algorithm is presented in Figure 3.10.

Figure 3.10 Flowchart of DFS Tree Search Algorithm

The search is initiated by establishing the root state and making a call to the recursive function, passing the root state as an argument. Pseudo-code for the calling function and the recursive function is presented in Figure 3.11 and Figure 3.12 respectively.

The recursive function receives the state that is passed to it and attempts to generate and evaluate the immediately reachable states. Each part in the set of parts is examined to determine whether the next workstation in its route is available. An immediately reachable state is generated when a next workstation is available. The state generation function, described in Section 3.4, is used to generate this state. The generated state reflects the advancement of the part being examined to its next workstation. The generated state is then submitted to the state evaluation function, which determines whether the state is to be considered for further state exploration. Two different evaluation criteria have been developed in this research; either may be used for evaluating generated states. The generated state, if accepted by the state evaluation function, is passed as an argument to a recursive function call. This recursive call takes up the generation and evaluation of subsequent reachable states.

If the generated state is not accepted by the state evaluation function, further examination of this state is discontinued. The search then continues with the examination of the remaining parts in the parts set. If all parts are examined and none of the subsequent sequences of part advancements leads to an empty state, the current recursive function returns the status of the current state as unsafe. If the calling function is another recursive function, it continues examining other immediately reachable states. However, the search terminates if the root state returns unsafe: since all reachable states have been examined without reaching an empty state, the parts assignment corresponding to the root state is flagged as unsafe. On the other hand, if examination of a state indicates it to be an empty state, the search returns the status of the state as safe. Detection of a safe state makes the recursion step out all the way to the calling procedure and return the status of the part move request as safe.

    /* calling procedure */
    L_ROOT = ∅
    j = 1
    while j ≤ |P|
        current_workstn = R(j, a(j))
        L_ROOT(current_workstn) = P(j)
        j = j + 1
    end while
    call DF_Search(L_ROOT, a)

Figure 3.11 Pseudo code of the calling procedure of the Depth First Search

    /* recursive function */
    DF_Search(L_STATE, a)
        j = 1
        part_count = |P|
        while j ≤ |P|
            if a(j) < |R(j)|                      /* part p(j) not emptied at this stage of search */
                next_workstn = R(j, a(j) + 1)
                if L_STATE(next_workstn) = ∅      /* p(j) can advance to its next step */
                    a(j) = a(j) + 1
                    L_CHILD = Γ(L_STATE, a, j)    /* call state generation function */
                    eval_status = E(L_CHILD, a)   /* call state evaluation function */
                    if eval_status = ACCEPT       /* generated state is accepted */
                        status = DF_Search(L_CHILD, a)
                        a(j) = a(j) - 1
                        if status = SAFE
                            return SAFE
                        end if
                    else if eval_status = DO_NOT_ACCEPT
                        a(j) = a(j) - 1
                    end if
                end if
            else if a(j) = |R(j)|
                part_count = part_count - 1
            end if
            j = j + 1
        end while
        if part_count ≤ 1
            return SAFE
        else if part_count > 1
            return UNSAFE
        end if
    end function

Figure 3.12 Pseudo code of the recursive function of the Depth First Search
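An executable Python rendering of Figures 3.11 and 3.12 is sketched below, again using the illustrative list-based state representation; evaluate plays the role of E(L, a) (for example flush_accepts above), and, as the state generation function of Section 3.4 does, a part advanced to its last processing step is simply dropped from the allocation.

    def df_search(L, a, routes, evaluate):
        # Depth-first look-ahead: True (SAFE) if an emptying sequence exists.
        if sum(1 for j in range(len(routes)) if a[j] < len(routes[j])) <= 1:
            return True                               # halting criterion: at most one part left
        for j, route in enumerate(routes):
            if a[j] < len(route):                     # part j not yet emptied
                next_workstn = route[a[j]]
                if L[next_workstn] is None:           # part j can advance
                    child_L, child_a = list(L), list(a)
                    child_L[route[a[j] - 1]] = None   # vacate the current workstation
                    child_a[j] += 1
                    if child_a[j] < len(route):       # not at its last step: occupy the next workstation
                        child_L[next_workstn] = j
                    if evaluate(child_L, child_a) and df_search(child_L, child_a, routes, evaluate):
                        return True                   # SAFE
        return False                                  # UNSAFE: all advancements explored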

3.9.2 Discussion

As an example, consider the depth first search exploration of the four workstation SU-RAS in which P, the set of parts, is comprised of three parts: P = [A, B, C]. Workstations are identified with numbers: W = [1, 2, 3, 4]. The routing of these parts is listed in Table 3.1. The root state is established to reflect the assignment of part A to workstation 1, part B to workstation 2 and part C to workstation 3. Thus the allocation vector L_ROOT = [A, B, C, ∅] and the processing stage vector a = [1, 1, 1]. The state information, (L_ROOT, a), is passed to the recursive function as an argument.

The recursive function examines the first part in the set of parts P, which is part A. In any state the location of the j-th part in P may be determined as R(j, a(j)) and the workstation corresponding to its next processing stage may be determined as R(j, a(j) + 1). The processing stage of part A in the current state is a(1). Thus R(1, 1) = machine 1 is the current location of part A in the root state and R(1, 2) = machine 2 is its next processing location. However, machine 2 is occupied and consequently part A is not considered further in this state. The recursive function then considers the next part in P, part B. The next machine of part B, machine 4, is determined to be available.

A child state is generated which corresponds to the assignment of part B to its next processing station. The child state is labeled s_1. The allocation vector of s_1 is L_s1 = [A, ∅, C, B] and the processing stage vector is a = [1, 2, 1]. This new state is evaluated by the state evaluation function. Two types of state evaluation methods have been developed in this research; this example employs the circuit detection algorithm for state evaluation. Thus the evaluated state is accepted if the circuit detection algorithm determines the state to be free of part flow deadlocks. The child state, s_1, does not have any part flow deadlocks and is consequently accepted by the state evaluation function for further exploration. The child state, s_1, is passed as an argument to the recursive function to continue further exploration. Figure 3.13 symbolically represents the depth first search tree that reflects the current stage of the search.

Figure 3.13 Depth First Search Tree: s_ROOT and s_1

In the state s_1, the parts B and C are blocked and only part A may be advanced to the next workstation in its route. A child state corresponding to this advancement is generated and labeled s_2. State s_2 has the allocation vector L_s2 = [∅, A, C, B] and processing stage vector a = [2, 2, 1]. Upon evaluation, s_2 is found to be free of any part flow deadlocks. The new child state, s_2, is then used to continue the search in a depth first manner by calling the recursive function.

In the newly generated state, s_2, part A is found to be blocked while both part B and part C can advance to their respective next processing steps. Advancement of part B from workstation 4 to workstation 1 creates the child state labeled s_3. State s_3 has the allocation vector L_s3 = [B, A, C, ∅] and processing stage vector a = [2, 3, 1]. Upon evaluation, s_3 is found to have a part flow deadlock, and exploration of this state is not pursued. The state s_4, a child state of the state s_2, is generated to reflect the advancement of part C from workstation 3 to workstation 1. State s_4 has the allocation vector L_s4 = [C, A, ∅, B] and processing stage vector a = [2, 2, 2]. Upon evaluation, s_4 is found to be free of any part flow deadlocks. The child state, s_4, is passed as an argument to the recursive function DF_Search to continue further exploration. Figure 3.14 symbolically represents the depth first search tree that reflects the current stage of the search.

Figure 3.14 Depth First Search Tree

Subsequent advancement of parts and the corresponding creation and exploration of states leads to the completion of parts A, C and B. The state generated after the completion of the last part is the empty state. Detection of a state with one or fewer assigned parts is the halting condition for the depth first search. The state returns the status SAFE and the search steps out of recursion until the SAFE status is returned to the calling function. Return to the calling function indicates completion of the search. The complete depth first search tree is represented in Figure 3.15.

Figure 3.15 Complete Depth First Search Tree

3.9.3 Ordering of Parts

Node selection in depth first search is trivial, as there is only one node available for consideration; this node is represented by s_cur. Selected nodes are unexplored when the search is proceeding in a downward direction (increasing depth) but are semi-explored when the search back-tracks or steps back (decreasing depth).

For any state s_i, P(s_i) is the ordered set of parts assigned to workstations in state s_i. Let p_j represent the j-th part in the set of parts P(s_i), which has route R(j) of length k = |R(j)| and is at the a(j)-th processing stage in the state s_i. The number of pending resource requests for part p_j in state s_i is equal to k - a(j). The ordering of parts is established once, at the time of search initialization, by ordering the parts in P(s_root) based on the number of pending resource requests. The following three orderings were considered: ascending, descending and un-ordered. The relative ordering of parts amongst each other is established at initialization and is not changed thereafter at any point during the search. Thus a search in which parts in P(s_root) are ordered in ascending order of resource requests will, at any state s_i, have the same relative ordering of parts as in P(s_root), although the original ordering may no longer be ascending in the number of pending resource requests in state s_i.

The motivation for investigating these orderings is derived from the working of the depth first search. At any stage of the tree search, the depth first procedure attempts to advance the first part it can advance from the subset of parts in the system that have not yet completed and exited at that stage of the search. It is hoped that placing parts with fewer pending requests at the head of this set of parts would lead to a quicker completion of some of these parts early on in the search and thereby provide more room for moving the remaining parts in the system. It was hoped that this would decrease the average total number of states explored before an empty state is found.
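As a small illustration, the initialization step could be written as follows, with the pending resource requests of part j computed as len(routes[j]) - a[j]; the function name and signature are illustrative.

    def order_parts(routes, a, order="ascending"):
        # Order part indices once, at search initialization, by pending resource requests.
        parts = list(range(len(routes)))
        if order == "unordered":
            return parts
        return sorted(parts, key=lambda j: len(routes[j]) - a[j],
                      reverse=(order == "descending"))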

3.10 Hybrid Search

This section describes the Hybrid Search (HS) algorithm for systematically generating and evaluating the set of reachable states. The section is organized into three sub-sections. Section 3.10.1 describes the working of the hybrid search algorithm. Section 3.10.2 presents an example application of the algorithm to determine the safety of a state. Section 3.10.3 presents a discussion of the method of assigning scores to states in the hybrid search algorithm.

3.10.1 Hybrid Search Algorithm

Like the depth first search, the hybrid search (HS) begins by establishing the requested parts assignment as the root state. The root node is established as the state corresponding to the potential parts assignment that may be realized if the requested part move is allowed. Hybrid Search generates and evaluates states reachable from the root state using a state selection strategy that determines which of the unexplored states is to be explored next.

Discussion of the HS algorithm requires the definition of S_GU, the set of generated but un-explored nodes. At some stage of the HS search a subset of the nodes in S_R, the set of reachable states, has been generated. Let this subset be represented by S_G. Further, at any stage of the HS, S_G may be partitioned into two sets: S_GE, the set of generated nodes that have been explored, and S_GU, the set of generated nodes that are un-explored at the given stage of HS.

Each generated state, when explored, is fully explored by generating all immediately reachable states. This precludes the possibility of any partially explored states. Thus generated states may only be either explored or un-explored.

The HS algorithm maintains the set of all generated but unexplored states, S_GU, throughout the course of the search. Every state in this set has a score assigned to it that is calculated after the generation of that state. The search proceeds by selecting the state with the lowest score in S_GU. It is hoped that by applying this strategy to select the next state for exploration, the search would rapidly converge to an empty state, if an empty state is reachable from the root state. A flowchart of the main procedure of the hybrid search is drawn in Figure 3.16.

Figure 3.16 Flowchart of Hybrid Search Procedure

Upon selection for exploration the state is removed from S_GU. In the HS algorithm, exploration of a state involves the generation and evaluation of all states immediately reachable from it. Thus exploration of a state in HS comprises the following steps.

a. Generation of immediately reachable states. The process of generating all immediately reachable states involves the examination of each part in that state. An immediately reachable state is generated for every part that has its next processing location available. The state generation function, described in Section 3.4 and used in the DFS algorithm, is used to generate immediately reachable states.

b. Evaluation of generated states. Every generated state is evaluated to ascertain whether it is to be considered for further state exploration. Evaluation of states is performed by the state evaluation function; its usage is the same as in the DFS algorithm. A discussion of the state evaluation function is presented in Section 3.6.

c. Computing and assigning scores. Upon evaluation, states are found to be either acceptable or unacceptable for further exploration. Every state determined to be acceptable is assigned a score by examining the information related to that state. The scoring method assigns a score to some state s by computing three coefficients f, g and h:

f = minimum number of processing steps remaining for any part in the state s
g = number of parts that have not completed all processing steps in state s
h = number of parts blocked in state s

The score is computed as score(s) = f*n^2 + g*n + h, where n is the number of workstations in the system. A scoring function is used to compute the coefficients and the score.

Every state that is assigned a score is examined against the halting criteria. Any state which is either empty or has only one part is a safe state and satisfies the halting criteria. Detection of a safe state in this manner causes the search to halt and return the status of the state as safe. After being assigned scores, the states are added to S_GU, the set of generated but unexplored states.

Once all states that are immediately reachable from the state being explored have been generated, evaluated and scored in this manner, the exploration of that state is considered to be complete. At this point the algorithm must determine which state to select for further exploration. The algorithm proceeds by selecting the state with the lowest score in S_GU. The search terminates if at any stage S_GU is found to be empty. This indicates that no further state exploration is possible and that all states reachable from the root state have been explored without encountering the empty state. The search then returns the status of the parts allocation being examined as unsafe.

The pseudo-code of the calling procedure, the scoring function and the function containing the HS algorithm is presented in Figure 3.17, Figure 3.18 and Figure 3.19 respectively.

    /* calling procedure */
    L_ROOT = ∅
    j = 1
    while j ≤ |P|
        current_workstn = R(j, a(j))
        L_ROOT(current_workstn) = P(j)
        j = j + 1
    end while
    call score_fn(L_ROOT, a)
    S_GU = ∅                          /* clear S_GU */
    S_GU = S_GU ∪ (L_ROOT, a)         /* add root state to S_GU */
    call HS_Search(S_GU)

Figure 3.17 Pseudo code of calling procedure of HS algorithm

    /* scoring procedure */
    score_fn(L, a)
        B = 0                              /* number of parts assigned in (L, a) */
        C = 0                              /* number of parts blocked in (L, a) */
        min_steps_remaining = M            /* M is an arbitrary large number */
        j = 1
        while j ≤ |P|
            steps_remaining = |R(j)| - a(j)
            if steps_remaining > 0 then
                B = B + 1
                if steps_remaining < min_steps_remaining
                    min_steps_remaining = steps_remaining
                end if
                next_workstn = R(j, a(j) + 1)
                if L(next_workstn) ≠ ∅     /* next workstation occupied */
                    C = C + 1
                end if
            end if
            j = j + 1
        end while
        A = min_steps_remaining
        score = A*n^2 + B*n + C
        return score
    end function

Figure 3.18 Pseudo code of the scoring procedure
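An equivalent Python sketch of the scoring procedure is given below over the same illustrative representation; the coefficients A, B and C of Figure 3.18 correspond to the coefficients f, g and h used in the text.

    def score_fn(L, a, routes, n):
        # Hybrid-search score f*n^2 + g*n + h for an accepted (non-empty) state.
        f, g, h = float("inf"), 0, 0
        for j, route in enumerate(routes):
            steps_remaining = len(route) - a[j]
            if steps_remaining > 0:
                g += 1                                 # part j has not completed all steps
                f = min(f, steps_remaining)            # fewest remaining steps over all parts
                if L[route[a[j]]] is not None:         # next workstation occupied: part j is blocked
                    h += 1
        return f * n ** 2 + g * n + h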

    /* hybrid search procedure */
    HS_Search(S_GU)
        while S_GU ≠ ∅
            /* select the lowest score state in S_GU */
            min_score = M                          /* M is an arbitrary large number */
            k = 1
            while k ≤ |S_GU|
                state_temporary = S_GU(k)
                if state_temporary(score) < min_score
                    min_score = state_temporary(score)
                    state_selected = state_temporary
                end if
                k = k + 1
            end while
            S_GU = S_GU - state_selected
            /* explore the selected state */
            j = 1
            while j ≤ |P|
                if a(j) < |R(j)|
                    next_workstn = R(j, a(j) + 1)
                    if L_STATE(next_workstn) = ∅
                        a(j) = a(j) + 1
                        L_CHILD = Γ(L_STATE, a, j)
                        eval_status = E(L_CHILD, a)
                        if eval_status = ACCEPT
                            if (L_CHILD, a) = s_0           /* empty state */
                                return SAFE
                            else if (L_CHILD, a) ≠ s_0
                                score = score_fn(L_CHILD, a)
                                (L_CHILD, a).score = score
                                S_GU = S_GU ∪ (L_CHILD, a)
                            end if
                        end if
                        a(j) = a(j) - 1
                    end if
                end if
                j = j + 1
            end while
        end while
        return UNSAFE
    end function

Figure 3.19 Pseudo code of the main procedure of HS algorithm
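The selection of the lowest score state in Figure 3.19 is a linear scan over S_GU; the Python sketch below keeps S_GU as a priority queue instead, with an insertion counter providing the FIFO tie-break described in Section 3.10.3. It reuses score_fn from the previous sketch, evaluate again plays the role of E(L, a), and the child generation is inlined as in the df_search sketch; all names are illustrative.

    import heapq

    def hs_search(root_L, root_a, routes, evaluate, n):
        # Best-first (hybrid) search: S_GU is a priority queue keyed on the score.
        counter = 0
        s_gu = [(score_fn(root_L, root_a, routes, n), counter, root_L, root_a)]
        while s_gu:
            _, _, L, a = heapq.heappop(s_gu)              # lowest score, generated-first on ties
            for j, route in enumerate(routes):
                if a[j] < len(route) and L[route[a[j]]] is None:
                    child_L, child_a = list(L), list(a)   # state generation, as in df_search
                    child_L[route[a[j] - 1]] = None
                    child_a[j] += 1
                    if child_a[j] < len(route):
                        child_L[route[a[j]]] = j
                    if not evaluate(child_L, child_a):
                        continue                          # rejected: dropped from consideration
                    if all(child_a[k] >= len(routes[k]) for k in range(len(routes))):
                        return True                       # empty state reached: SAFE
                    counter += 1
                    heapq.heappush(s_gu, (score_fn(child_L, child_a, routes, n),
                                          counter, child_L, child_a))
        return False                                      # S_GU exhausted: UNSAFE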

3.10.2 Hybrid Search Example

As an example, consider the search for a reachable empty state, using the HS algorithm, for the parts set P = [A, B, C, D], which has the processing stage vector a = [1, 1, 1, 1] and whose routing is listed in Table 3.4.

Table 3.4 Part Routings for HS Algorithm example

Part    A    B    C    D

The system considered in this example is a five workstation SU-RAS. The example assumes the state evaluation function employs the circuit detection procedure to evaluate states. The calling procedure establishes data structures for the parts set P, the parts routing R, and the processing stage vector a. The allocation vector for the root state is computed as L_ROOT; for this example, L_ROOT = [B, ∅, D, C, A].

Upon establishing the necessary data structures to represent the root state, a function call is made to the scoring function. The scoring function receives, as an argument, the state to which a score needs to be assigned; here the root state is passed. The scoring function computes the number of remaining processing steps for parts A, B, C and D as 3, 3, 3 and 2. Coefficient f of the score, which represents the minimum number of processing steps remaining for any part, is thus set to two, the number of steps remaining for part D. Coefficient g of the score, which represents the number of parts that have not yet completed all processing steps, has a value of four for the root state. Of these four parts, parts A, C and D have their next processing locations available, whereas part B is blocked. Thus coefficient h, which represents the number of blocked parts, is set to one.

The score is computed as f*n^2 + g*n + h, where n is the number of workstations in the system. Since n = 5, substituting the values of the coefficients and n, the score of the root state is calculated to be 2*25 + 4*5 + 1 = 71. The scoring function returns this score to the calling procedure, where it is assigned to the root state.

The HS algorithm maintains a list of generated but unexplored states, represented by S_GU. The root state, along with its score, is added to S_GU as the final initialization step of the algorithm. At initialization the root state is the only state in S_GU.

Upon the addition of the root state to S_GU, a call is made to the main function of the HS algorithm. As a first step, the algorithm must select the lowest score state in S_GU. Since the root state is the only state in S_GU at this stage, it is selected for exploration. Once a state is selected for exploration it is removed from S_GU, since it no longer remains un-explored.

The algorithm proceeds to generate, evaluate and score the states immediately reachable from the state selected for exploration. Three states are generated, one each for the advancement of the parts A, C and D to their respective next processing locations. The child states corresponding to the advancements of parts A, C and D are labeled s_1, s_2 and s_3 respectively. The child states s_1 and s_3, corresponding to the advancements of parts A and D, are evaluated and accepted by the state evaluation function. However, the child state s_2, corresponding to the advancement of part C, contains a part flow deadlock and is not accepted by the state evaluation function; this state is dropped from further consideration. Scores for the accepted states are computed and assigned: the child state s_1 has a score of 73 and the child state s_3 has a score of 48. Both of these states, along with their scores, are added to the list S_GU and the main loop of the search returns to select the next state for exploration.

The list S_GU now contains two states, s_1 and s_3. State s_3, having the lower score of 48, is selected for exploration and consequently removed from S_GU. State s_3 has only one immediately reachable state, corresponding to the advancement of part B. This child state is generated by advancing part B from machine 1 to machine 3 and is labeled s_4. Part D at machine 2 has a free path and may be removed from s_4 without affecting the safety of s_4; consequently part D is removed from s_4. State s_4 is found to be acceptable by the state evaluation function and is assigned a score of 65 by the scoring function. This state is added to S_GU.

With s_3 having no other reachable states, its exploration is complete. The list S_GU has two states, s_1 and s_4. State s_4 is selected for exploration since it has the lower score. Three states are generated, one each for the advancement of the parts A, B and C to their respective next locations. The child states corresponding to the advancement of the parts A, B and C are labeled s_5, s_6 and s_7 respectively. Child state s_5 is found to be acceptable by the state evaluation function; its score is determined to be 67 and s_5 is added to S_GU. However, the states s_6 and s_7 are not accepted by the state evaluation function and are dropped from further consideration.

With s_4 having no other reachable states, its exploration is complete. The list S_GU contains two states, s_1 and s_5. State s_5 is selected for further exploration since it has the lower score. One immediately reachable state may be generated from s_5 by advancing part A to its next processing location. This state is labeled s_8. State s_8 is determined to be acceptable by the state evaluation function and is assigned a score of 41 by the scoring function. This state is added to S_GU.

With s_5 having no other reachable states, its exploration is complete. The list S_GU contains two states, s_1 and s_8. State s_8 is selected for further exploration since it has the lower score. Two states are generated, one each for the advancement of the parts B and C to their respective next processing locations. The child states corresponding to the advancement of the parts B and C are labeled s_9 and s_10 respectively. Child state s_9 is determined to be un-acceptable by the state evaluation function and is dropped from further consideration.

When the other child state, s_10, is generated and examined for free paths, it is found that part A, assigned to machine 5, has a free path. Thus part A may be removed from s_10 without affecting the safety of s_10; consequently part A is removed from s_10. Removal of part A from s_10 creates a free path for part C, so part C is also removed from s_10. Removal of part C from s_10 creates a free path for part B, so part B is also removed from s_10. Since there are no parts left in s_10, it is an empty state. The search terminates and returns the status of the requested part move as safe. Figure 3.20 depicts the tree generated as a result of the state exploration with the HS algorithm.

Figure 3.20 Tree generated for the Hybrid Search example

3.10.3 Discussion of the Scoring Method

This section discusses the motivation for, and the consequences of, the methodology adopted for scoring states. The quadratic expression f*n^2 + g*n + h is used to compute scores. The ensuing discussion presents a functional analysis of this expression. The expression for computing scores may be viewed as a sum of two sub-expressions: f*n^2 and g*n + h.

Any part that is advanced to its last processing step, in some state, can complete and exit the system without any subsequent inhibition to its advancement. This result is used during the generation of states which correspond to the advancement of some part to its last processing step. The state generation function (Section 3.4) ignores parts at their last processing step, and any such parts are not present in the parts assignment corresponding to the generated state. Thus the minimum number of remaining processing steps of a currently assigned part must be at least 1, which implies that the lower bound of coefficient f is 1. As a result, the lower bound of the sub-expression f*n^2 is 1*n^2 = n^2.

Any state in an n workstation SU-RAS in which n parts are assigned must be deadlocked, and parts cannot be advanced without some deadlock resolution procedure.

States are scored only if found to be acceptable by the state evaluation function. Thus the number of parts assigned in the state being scored must be less than n. Consequently, the upper bound of coefficient g is n - 1. Any state evaluated as acceptable by the state evaluation function must have at least one part that is free to advance. Thus the maximum number of parts blocked in a state being scored must be one less than the number of parts assigned in that state. Consequently, the upper bound on coefficient h must be less than the upper bound of coefficient g. Since the upper bound on coefficient g is n - 1, the upper bound on coefficient h must be n - 2. Further, the upper bound of the sub-expression g*n + h must be equal to (upper bound of g)*n + (upper bound of h) = (n - 1)*n + (n - 2) = n^2 - 2. Since the lower bound of f*n^2, which is n^2, is greater than the upper bound of g*n + h, which is n^2 - 2, the sub-expression f*n^2 is more significant (has a greater contribution to the score) than the sub-expression g*n + h.

Consequently, the state which contains the part with the least number of processing steps remaining will have the lowest score. Such a state will be favoured for selection for exploration by the HS algorithm. Since the number of blocked parts (represented by coefficient h), for every state being scored, must be less than the number of parts assigned (represented by coefficient g), h < g. Thus if more than one state has the same value of the f coefficient of its score, coefficient g is more significant (has a greater contribution to the value of the score) than coefficient h. Consequently, if two or more states have the same values of their f coefficients, the state with the least number of parts assigned will have the lowest score. Further, if two or more states have the same values of their f and g coefficients, the state with the least number of blocked parts will have the lowest score. If two or more states in S_GU are tied with the lowest score, the FIFO rule is used for selecting the next state for exploration: the first state in S_GU with the lowest score is selected.

3.11 Repetitive Search in Look Ahead

Figure 3.21 Partial Repetitive Search Tree

Figure 3.21 represents a partial tree corresponding to the exploration of some sequences of part moves from some state s_1. The nodes of the tree represent the processing stage vectors of the states that are represented by those nodes. Let s_1 represent a state in which m parts are assigned in a SU-RAS system comprised of n workstations and no buffers. Parts are assumed to have no choices in their processing sequences. Seven states, named s_1, s_2, s_3, ..., s_7, are drawn as arrays which depict the processing stage vectors (Section 3.3) for each of these states. Thus each of these states is an array of m elements, where a_k, the k-th element, 1 ≤ k ≤ m, is a positive integer that indicates the processing step of the k-th part in that state. If R_k represents the route of the k-th part in P, then 1 ≤ a_k ≤ |R_k|. Further, the processing stage of some part p_k in s_1 is given by a_k.

Starting at s_1, a child state is constructed for every part that has the next workstation in its processing sequence available, provided the new state is accepted by the state evaluation function (Section 3.6). Consider p_i, the i-th part, at its a_i-th processing step, to have its next workstation available in s_1. If the parts assignment resulting from advancing p_i to its next workstation is accepted by the state evaluation function, then the corresponding state is generated. This state, in which p_i is at its (a_i + 1)-th processing step, is depicted as s_2 in Figure 3.21. Similarly, the child state s_3 is generated from s_1 to reflect the advancement of p_j to its next, (a_j + 1)-th, processing step. Exploration of s_2 leads to the generation of states s_4 and s_5, which correspond to advancements of the parts p_i and p_j respectively from their assignments in s_2. Exploration of s_3 leads to the generation of states s_6 and s_7, which correspond to advancements of the parts p_i and p_j respectively from their assignments in s_3.

Upon examination, it is clear that states s_5 and s_7 represent the same parts assignment. Thus multiple instances of a certain parts assignment may be encountered at different stages of the tree search. If at a certain stage of the search the entire sub-tree rooted at s_5 has been examined without discovering an empty state, then s_5 is an unsafe state. If s_7 is discovered later during the search, then further branching from s_7 would be redundant, as s_7 will induce the same sub-tree as s_5. Repetitive searching of states in this manner is wasteful.

A tabu list was used to avoid repetitive search. This tabu list contains states that were discovered to be unsafe during the course of the search. A state is added to the list if:

(a) it is not accepted by the state evaluation function, or
(b) the entire sub-tree rooted at this state has been examined without discovering an empty state.

Every immediately reachable state produced by the state generation function is checked for a match in the tabu list. The state is submitted to the state evaluation function for further examination only if no match is found in the tabu list.
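The tabu bookkeeping can be grafted onto the df_search sketch of Section 3.9 as shown below; the processing stage vector, stored as a tuple, serves as the key of the tabu set, and the representation remains illustrative.

    def df_search_tabu(L, a, routes, evaluate, tabu=None):
        # Depth-first look-ahead with a tabu list of unsafe processing stage vectors.
        if tabu is None:
            tabu = set()
        if sum(1 for j in range(len(routes)) if a[j] < len(routes[j])) <= 1:
            return True                                # halting criterion: at most one part left
        for j, route in enumerate(routes):
            if a[j] < len(route) and L[route[a[j]]] is None:
                child_L, child_a = list(L), list(a)    # generate the child state
                child_L[route[a[j] - 1]] = None
                child_a[j] += 1
                if child_a[j] < len(route):
                    child_L[route[a[j]]] = j
                key = tuple(child_a)                   # processing stage vector of the child
                if key in tabu:
                    continue                           # repeat of a known unsafe state: skip it
                if not evaluate(child_L, child_a):
                    tabu.add(key)                      # rejected by E(L, a)
                    continue
                if df_search_tabu(child_L, child_a, routes, evaluate, tabu):
                    return True
                tabu.add(key)                          # sub-tree exhausted without an empty state
        return False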

As an example, consider the five workstation SU-RAS in which three parts, p_1, p_2 and p_3, are being serviced at workstations 1, 4 and 3 respectively. Table 3.5 lists the routes of these parts.

Table 3.5 Part Routings for system state example

Part    p_1    p_2    p_3

As can be observed from the routing, all three parts are at their first step. Thus the processing stage vector of the current parts assignment is given as a = [1, 1, 1]. Let this state be represented by s_1. The remainder of this section evaluates the safety of s_1 and illustrates how a tabu list is used to avoid repetitive state exploration. The DFS algorithm (Section 3.9), equipped with the circuit detection state evaluation function, is employed to implicitly enumerate and evaluate the states reachable from s_1.

The parts in the state s_1 are examined. Part p_1 is found to have its next processing step available. The state created by advancing p_1 to its next processing step has processing stage vector [2, 1, 1] and is represented by s_2. A check is made for the presence of [2, 1, 1] in the tabu list. No matches are found. State s_2 is evaluated with the evaluation function and is found to be free of any part flow deadlocks. Thus s_2 becomes a candidate for further depth first exploration.

The parts in the state s_2 are examined. Part p_2 is found to have its next processing step available. The state created by advancing p_2 to its next processing step has processing stage vector [2, 2, 1] and is represented by s_3. A check is made for the presence of [2, 2, 1] in the tabu list. No matches are found. State s_3 is evaluated with the evaluation function and is found to be free of any part flow deadlocks. Thus s_3 becomes a candidate for further depth first exploration.

The parts in the state s_3 are examined. Part p_1 is found to have its next processing step available. The state created by advancing p_1 to its next processing step has processing stage vector [3, 2, 1] and is represented by s_4. A check is made for the presence of [3, 2, 1] in the tabu list. No matches are found. State s_4 is evaluated with the evaluation function and is found to be free of any part flow deadlocks. Thus s_4 becomes a candidate for further depth first exploration.

The parts in the state s_4 are examined. Part p_3 is found to have its next processing step available. The state created by advancing p_3 to its next processing step has processing stage vector [3, 2, 2] and is represented by s_5. A check is made for the presence of [3, 2, 2] in the tabu list. No matches are found. State s_5 is evaluated with the evaluation function and is found to be free of any part flow deadlocks. Thus s_5 becomes a candidate for further depth first exploration.

The parts in the state s_5 are examined. Part p_1 is found to have its next processing step available. The state created by advancing p_1 to its next processing step has processing stage vector [4, 2, 2] and is represented by s_6. A check is made for the presence of [4, 2, 2] in the tabu list. No matches are found. State s_6 is evaluated with the evaluation function and is found to have a part flow deadlock. The processing stage vector corresponding to the state s_6 is added to the tabu list.

The search continues with the examination of the remaining parts in s_5. Part p_2 is found to have its next processing step available. The state created by advancing p_2 to its next processing step has processing stage vector [3, 3, 2] and is represented by s_7. A check is made for the presence of [3, 3, 2] in the tabu list. No matches are found. State s_7 is evaluated with the evaluation function and is found to have a part flow deadlock. The processing stage vector corresponding to the state s_7 is added to the tabu list.

The search continues with the examination of the remaining parts in s_5. No further immediately reachable states exist. Since all states reachable from s_5 have been explored without finding an empty state, the processing stage vector corresponding to s_5, a = [3, 2, 2], is added to the tabu list. The search steps out of recursion and returns to continue with the further exploration of s_4.

The search continues with the examination of the remaining parts in s_4. No other immediately reachable states exist. Since all states reachable from s_4 have been explored without finding an empty state, the processing stage vector corresponding to s_4, a = [3, 2, 1], is added to the tabu list. The search steps out of recursion and returns to continue with the further exploration of s_3.

The search continues with the examination of the remaining parts in s_3. Part p_3 is found to have its next processing step available. The state created by advancing p_3 to its next processing step has processing stage vector [2, 2, 2] and is represented by s_8. A check is made for the presence of [2, 2, 2] in the tabu list. No matches are found. State s_8 is evaluated with the evaluation function and is found to be free of any part flow deadlocks. Thus s_8 becomes a candidate for further depth first exploration.

The parts in the state s_8 are examined. Part p_1 is found to have its next processing step available. The state created by advancing p_1 to its next processing step has processing stage vector [3, 2, 2] and is represented by s_9. A check is made for the presence of [3, 2, 2] in the tabu list. A match is found, which corresponds to s_5. Since s_5 is known to be an unsafe state, further evaluation of s_9 is not considered.

The search continues with the examination of the remaining parts in s_8. Part p_2 is found to have its next processing step available. The state created by advancing p_2 to its next processing step has processing stage vector [2, 3, 2] and is represented by s_10. A check is made for the presence of [2, 3, 2] in the tabu list. No matches are found. State s_10 is evaluated with the evaluation function and is found to be free of any part flow deadlocks. Thus s_10 becomes a candidate for further depth first exploration.

The search continues in this manner. No sequence of part moves is found to lead to an empty state. Repetitive searching of states during the search is avoided. Figure 3.22 depicts the entire search tree. States are numbered in the order of their generation.

Figure 3.22 Repetitive Search Example
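The repeat-avoidance mechanism walked through above amounts to a depth first search that carries a tabu list of processing stage vectors known to be unsafe. The following is a minimal sketch only, assuming hypothetical helpers: next_states(state) yields the states reachable by one part move, has_part_flow_deadlock(state) stands for the state evaluation function of Section 3.6, state.stage_vector() returns the processing stage vector as a tuple, and state.is_empty() tests for the empty state.

def is_safe(state, tabu):
    # Returns True if some sequence of part moves empties the system.
    if state.is_empty():
        return True
    for child in next_states(state):
        key = child.stage_vector()
        if key in tabu:                    # previously found unsafe
            continue
        if has_part_flow_deadlock(child):  # state evaluation function
            tabu.add(key)
            continue
        if is_safe(child, tabu):           # depth first recursion
            return True
        tabu.add(key)                      # explored without reaching the empty state
    return False

The tabu list persists across the whole evaluation of a part move request, which is what prevents s 9 from being re-expanded in the example above.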

3.12 Summary

Two tree search algorithms were presented along with examples. Two procedures used to evaluate states were presented with relevant examples. Repetitive generation and evaluation of states was identified as wasteful. A procedure to avoid repetitive search was presented. Algorithms and procedures presented in this chapter are part of the state space search procedures. Chapter 4 presents some procedures that may be applied before the state space search procedure. These procedures either evaluate the requested part move or may be instrumental in reducing the number of states that are subsequently searched by the state space search procedure.

Chapter 4 PRE-SOLVE PROCEDURES

4.1 Introduction

Although the maximally permissive approach assures acceptance of all safe moves, it is, admittedly, not scalable (Lawley et al [1998]). This research endeavors to find ways to make this approach applicable to systems with a larger number of resources. Some pre-solve procedures are presented in this chapter. These procedures attempt to reduce the search space and try to make a decision on the part move request. Whenever a pre-solve procedure makes a decision on the part move request, the complete look ahead search (described in Chapter 3) is avoided entirely.

The proposed deadlock avoidance solution employs directed graphs to represent the system state at any time. Two types of directed graph representations are used in this research. The directed graphs used to compute circuits were described in Section 3.7.1. A different kind of directed graph is used to compute strongly connected components. This type of directed graph is called a non-bounded directed graph and is described in Section 4.2.

Strongly connected components (SCCs) are employed to limit the look ahead search for deadlocks to only those parts lying in one SCC. Certain sufficiency conditions are also identified and are used to analyze the structure of the SCCs. Realization of some of these conditions may provide a basis for making decisions regarding the safety of part move requests. The methodology to do this is described in Sections 4.3 through 4.6.

Certain procedures used as part of the look ahead procedure can also be applied to reduce the search space and detect part flow deadlocks before a call to the look ahead procedure is made. These techniques, described in Chapter 3, are briefly mentioned here to describe their implementation outside of the look ahead procedure. Section 4.7 describes the application of Free Path Reduction outside and before the look ahead search procedure, and Section 4.8 describes the detection of circuits to identify part flow deadlocks. Section 4.9 describes how the different pre-solve procedures are incorporated in one of the proposed deadlock avoidance algorithms.

4.2 Non-Bounded Graphs

A non-bounded system graph, D NB = (V, A NB ), is defined as:

1. V is the finite set of machines
2. A NB is the finite set of directed arcs representing the non-bounded part transitions.

Pseudo code for establishing the non-bounded graph is presented in Figure 4.1.

A NB ← φ
i ← 1
while i ≤ m {
    part ← p i
    y ← l part
    while y < k part {
        A NB ← A NB ∪ { (R part (y), R part (y+1)) }
        y ← y + 1
    }
    i ← i + 1
}

Figure 4.1 Pseudo code for establishing non-bounded graph

Non-bounded graphs contain all subsequent part transitions of all parts currently assigned to machines. Directed arcs are drawn between subsequent destination machines until the traversed part's last destination machine is encountered. This traversal is done

for all parts in the system and for the new part, a part requesting to be moved to its first destination location.

Consider the five machine FMS (machines labeled M1, M2, M3, M4 and M5) having four parts with part routings given in Table 4.1. Part A is requesting a move from its current assignment at machine M1 to its next destination, machine M5.

Table 4.1 Part Routings for example
Part    Routing Steps
A       M1 M5 M2 M4 M3
B       M2 M3
C       M3 M5 M1 M4 M2
D       M4 M5 M3

The non-bounded graph would be constructed by adding arcs for all subsequent part transitions for each of these parts. The non-bounded graph for this system is drawn in Figure 4.2. Machines are represented by similarly labeled vertices in the graph. Vertices are represented by circles and labels are placed adjacent to and outside of the circle. Vertices representing occupied machines are depicted by circles with the part label inside the circle while vertices representing un-occupied machines are depicted by empty circles.

Figure 4.2 Example of a Non-Bounded Graph
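A minimal sketch of this construction, using the routings of Table 4.1, is given below. The representation of a part as a (current stage index, route) pair, and the assumption that parts B, C and D sit at the first step of their routes, are made only for this illustration.

def build_non_bounded_graph(parts):
    # parts: list of (current_stage_index, route); routes are machine labels
    arcs = set()
    for stage, route in parts:
        # add an arc for every remaining transition in the part's route
        for y in range(stage, len(route) - 1):
            arcs.add((route[y], route[y + 1]))
    return arcs

# Table 4.1; part A is requesting its move from M1 to M5.
parts = [
    (0, ["M1", "M5", "M2", "M4", "M3"]),   # A
    (0, ["M2", "M3"]),                     # B
    (0, ["M3", "M5", "M1", "M4", "M2"]),   # C
    (0, ["M4", "M5", "M3"]),               # D
]
print(sorted(build_non_bounded_graph(parts)))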

4.3 Computing Strongly Connected Components

A strongly connected component is the maximal set of vertices that are mutually reachable from each other. Given a directed graph, G = (V, A),

1. V is the finite set of vertices

2. A is the finite set of directed arcs, A ⊆ {(u, v) : u, v ∈ V}.

Although the general definition of an SCC which has been presented here allows for self loops (directed arcs originating and terminating at the same vertex), self loops are not modeled in this thesis. Consequently, in this thesis A is the set of directed arcs between vertices such that A ⊆ {(u, v) : u, v ∈ V and u ≠ v}.

A path from vertex v 0 to v k in G is an alternating sequence (v 0, (v 0, v 1 ), v 1, (v 1, v 2 ), …, v k-1, (v k-1, v k ), v k ) of vertices and edges that belong to V and A respectively. Two vertices v and w in G are path equivalent if there is a path from v to w and a path from w to v. Path equivalence partitions V into maximal disjoint sets of path equivalent vertices. These sets are called the strongly connected components (SCCs) of the graph.

Figure 4.3 Strongly Connected Components

Figure 4.3 is used to illustrate the concept of SCCs. Vertices A, B, C and E are all mutually reachable, i.e. there exists at least one path of traversal along the direction of the arcs between any two of these vertices. Vertices A, B, C and E form one SCC. D and F each separately form single vertex SCCs.

The algorithm for finding the SCCs of a graph G = (V, A) uses the transpose of G, which is defined to be the graph G T = (V, A T ), where A T is the set of arcs of A reversed in their directions, thus A T = {(u, v) : (v, u) ∈ A}. The algorithm computes SCCs in linear time (Θ(v + e), where v is the number of vertices and e is the number of arcs in the graph) using two depth first searches, one on G and one on G T.

1. call DFS(G) to compute finishing times, f(v), for each vertex v
2. compute G T
3. call DFS(G T ), but consider vertices in order of decreasing f(u) (as computed in Step 1)
4. output the vertices of each tree in the depth first forest of step 3 as a separate SCC
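A sketch of this two-pass computation is given below; it assumes the graph is supplied as an adjacency mapping in which every vertex appears as a key, and it uses an iterative second pass for brevity.

def strongly_connected_components(graph):
    # graph: {vertex: iterable of successor vertices}
    order, visited = [], set()

    def dfs_order(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs_order(w)
        order.append(v)                  # appended at finishing time

    for v in graph:
        if v not in visited:
            dfs_order(v)

    # build the transpose graph G^T by reversing every arc
    transpose = {v: [] for v in graph}
    for u in graph:
        for w in graph[u]:
            transpose[w].append(u)

    sccs, assigned = [], set()
    for v in reversed(order):            # decreasing finishing time
        if v in assigned:
            continue
        component, stack = [], [v]
        while stack:
            u = stack.pop()
            if u in assigned:
                continue
            assigned.add(u)
            component.append(u)
            stack.extend(w for w in transpose[u] if w not in assigned)
        sccs.append(component)
    return sccs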

After identifying all SCCs of the graph, the condensation of the SCCs can be drawn. Each SCC of the directed graph can be collapsed to a single vertex, yielding a new directed graph that has no cycles. The acyclic graph obtained is called a condensation of the directed graph whose vertices were collapsed.

Let C 1, …, C p be the strong components of G. The condensation of G is denoted by G c and is the directed graph G c = (V c, A c ), where V c has p elements C 1, …, C p and (C i, C j ) is a directed arc in A c iff i ≠ j and there is an edge in A from some vertex in C i to some vertex in C j.

The condensation for the directed graph of Figure 4.3 is depicted in Figure 4.4. The condensation is always acyclic, i.e., is without any circuits. This property is employed for reducing the problem size as described in Sections 4.4 and 4.5.

Figure 4.4 Example of a Condensation Graph
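A companion sketch builds the condensation from the SCC list computed above; the SCC indices stand in for the labels C 1, …, C p.

def condensation(graph, sccs):
    # Map every vertex to the index of its SCC, then keep only the arcs
    # that cross between two different SCCs.
    comp_of = {v: i for i, scc in enumerate(sccs) for v in scc}
    arcs = set()
    for u in graph:
        for w in graph[u]:
            if comp_of[u] != comp_of[w]:
                arcs.add((comp_of[u], comp_of[w]))
    return arcs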

4.4 Theoretic Validity of SCC Reduction

When dealing with large problems, researchers sometimes try to partition the problem and work on smaller partitions of the original problem. Strongly connected components provide a basis for partitioning the deadlock avoidance problem. The property of mutual acyclicity of the SCCs forms a basis for limiting the deadlock search effort. The discussion that follows establishes some definitions and results that attempt to exploit the acyclicity of SCCs to reduce the problem size.

The remainder of this section is organized into four subsections. Section 4.4.1 presents some concepts and notation used in the subsequent discussion. Section 4.4.2 presents a series of results in the form of four lemmas. These lemmas establish relationships between the safety (or the lack thereof) of the state being evaluated and the safety (or the lack thereof) of the individual subsets of parts residing in each of the SCCs in that state. Section 4.4.3 utilizes results from Section 4.4.2 to devise a procedure which sequentially attempts to remove the parts residing in each of the SCCs. The procedure is named the EMPTY-LOWEST-LABEL-SCC (ELLS) algorithm. A description of the ELLS algorithm is followed by a proof establishing its correctness.

Finally, Section 4.4.4 builds on the preceding results and presents a proof which establishes logical equivalence between the safety (or the lack thereof) of the state being evaluated and the safety (or lack thereof) of a subset of parts residing in a single SCC only.

4.4.1 Introduction

Let s i represent the currently examined state, P(s i ) represent the set of parts assigned to machines in the state s i, D(s i ) represent the non-bounded directed graph representation (Section 4.2) of the assignment of parts and their pending transitions in the state s i, and C(s i ) represent the set of strongly connected components in D(s i ). If there are k SCCs in D(s i ) then C(s i ) = { c 1, c 2, …, c k }.

Let P(c j ) represent the set of parts assigned, at state s i, to the set of machines comprising the SCC c j ∈ C(s i ). It may be noted that:

P(c j ) ⊆ P(s i ) for every c j ∈ C(s i ),

P(c j ) ∩ P(c l ) = φ for all c j, c l ∈ C(s i ) where j ≠ l (disjoint sets), and

∪ j=1..k P(c j ) = P(s i ).

Any new state created by removing some parts from a state is referred to as a subset state of the original state. Creation of subset states in this manner is formally referred to, in the remainder of this chapter, as the state reduction operation. The state reduction operation is represented by the \ operator. This operation takes two operands in the following format:

(lh operand) \ (rh operand)

The lh operand is a state from which the subset state is created. If s j (say) is the state which is the lh operand and P(s j ) is the set of all parts assigned to machines in the state s j, then the rh operand can be any set of parts, P y (say), such that P y ⊆ P(s j ). Thus the state reduction operation s j \ P y represents the subset state generated by removing all parts in P(s j ) - P y from the state s j. Assignments of all parts in the subset state are the same as their assignments in s j.

Let s i be some state, let P(s i ) = {A, B, C, D, E, F} be the set of parts in s i, and let P y = {B, C, D} be a subset of P(s i ). Then s i \ P y represents the subset state from which all parts in P(s i ) - P y = {A, F, E} have been removed and the assignment of parts in P y remains the same as it was in s i.

As an illustration of the state subset consider the state s i to be a state of an eight machine SU-RAS. Machines have unit capacities and are identified with numbers; W = {1, 2, 3, 4, 5, 6, 7, 8}. The routing of these parts is listed in Table 4.2. It is assumed that

parts are at their respective first processing stage, thus the processing stage vector a = [1, 1, 1, 1, 1, 1].

Table 4.2 Part Routings for State Subset Example
Part    A    B    C    D    E    F

The original state s i is represented in Figure 4.5 (a) and the subset state, s i \ P y, is represented in Figure 4.5 (b).

Figure 4.5 Example of State Subset
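Treating a state as a mapping from parts to their assigned machines, the state reduction operation amounts to a simple filtering step. The sketch below is illustrative only; the machine assignments shown are placeholders rather than the assignments of Table 4.2.

def reduce_state(state, keep_parts):
    # s_i \ P_y: keep the assignments of the parts in P_y, drop all others.
    return {part: machine for part, machine in state.items() if part in keep_parts}

s_i = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}   # placeholder assignments
P_y = {"B", "C", "D"}
print(reduce_state(s_i, P_y))        # parts A, E and F are removed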

Let s i be any state and let P(s i ) be the set of all parts assigned in s i. It is noted that if P x = φ, then s i \ P x = s 0, the empty state. Else if P x = P(s i ), then s i \ P x = s i.

Given two states s a and s b, where s b is reachable from s a, M(s a → s b ) represents any feasible sequence of moves that leads a SU-RAS system in state s a to state s b. M(s a → s b ) is defined only if s b is reachable from s a. In particular the emptying sequence M(s a → s 0 ) can be defined for every SAFE s a.

All part moves comprising the sequence of moves must be feasible. A part move is feasible if:

a) conditions 1, 2 and 3 listed in Section 1.1 are observed
b) unit capacity at processing locations is respected
c) the transition follows the sequence of processing requirements in the part route

Lemma 4.4.1: Every subset state of a SAFE state is SAFE.

PROOF: Let s i be the given SAFE state. Safety of s i implies s 0 is reachable from s i by advancing parts in P(s i ), from their assignments in s i to subsequent resources in their request structure, in a certain sequence. Let M(s i → s 0 ) represent one sequence of part moves that leads the system from state s i to s 0, the empty state. Further let M(s i → s 0 ) be organized as a list of the emptying part moves.

Further let P GHOST represent some subset of P(s i ) which defines the subset state s i \ [P(s i ) - P GHOST ]. Consider an operation on M(s i → s 0 ) which deletes all moves of parts P GHOST. Let the list trimmed by this operation be represented by M(s i → s 0 ) Δ P GHOST. The sequence of part moves in M(s i → s 0 ) Δ P GHOST forms an emptying sequence for s i \ [P(s i ) - P GHOST ]. This may be visualized by assuming parts in P GHOST to be present as ghost parts that are advanced using ghost moves as per the sequence defined in M(s i → s 0 ) along with parts in P(s i ) - P GHOST.

Since an emptying sequence can be constructed for every subset state, it can be concluded that every subset state of a SAFE state is SAFE.

Since every SCC, c j, of C(s i ) contains a (possibly empty) set of parts represented by P(c j ), the above result may be interpreted, in the context of SCCs, to

apply to subsets of P(s i ) which lie in the various SCCs of D(s i ). Thus the result might be stated as: if s i is a known SAFE state then every subset SCC state, s i \ P(c j ), corresponding to each c j ∈ C(s i ), must also be SAFE * *

Let P REMOVED (s i ) represent some subset of parts in P(s i ) that has an emptying sequence of part moves which does not require movement of any parts from P(s i ) - P REMOVED (s i ). Thus all parts in P REMOVED (s i ) may be advanced through all their subsequent resource assignments to completion without requiring any movement of parts in P(s i ) - P REMOVED (s i ) from their assignments in s i. As an example, the parts A, B and C have a sequence of moves which allows them to complete their remaining processing requests and leave the system without requiring the remaining parts (D, E and F) to be moved from their assignments in s i.

Lemma 4.4.2: Let P(s i ) be the set of parts assigned in state s i and let P REMOVED (s i ) be a (possibly empty) subset of P(s i ). If all parts in P REMOVED (s i ) can complete their remaining processing requests and leave the system without requiring any movement of parts in P(s i ) - P REMOVED (s i ) from their assignments in state s i, then removing the parts P REMOVED (s i ) does not affect the safety (or lack thereof) of the remaining parts in the system (P(s i ) - P REMOVED (s i )). In terms of the state reduction operation (Section 4.4.1), it can be stated that if state s i is safe (unsafe) then the subset state represented by s i \ [P(s i ) - P REMOVED (s i )] must also be safe (unsafe).

PROOF: The proof is comprised of two cases depending on whether s i is SAFE or not.

CASE I: s i is SAFE

Safety of s i implies the existence of M(s i → s 0 ). Correspondingly, safety of s i \ [P(s i ) - P REMOVED (s i )] can be established if the existence of M([s i \ [P(s i ) - P REMOVED (s i )]] → s 0 ) can be established. Proceeding as in the proof of Lemma 4.4.1, it is easy to see that M([s i \ [P(s i ) - P REMOVED (s i )]] → s 0 ) can be constructed by deleting from M(s i → s 0 ) all moves of parts P REMOVED (s i ). Thus if an emptying sequence of parts exists for s i then a corresponding emptying sequence can be constructed for every s i \ [P(s i ) - P REMOVED (s i )]. Thus removal of parts P REMOVED (s i ) does not affect the safety of a given SAFE state.

CASE II: s i is UNSAFE

A state is UNSAFE if no sequence of part moves can empty all parts in P(s i ). At least two parts are required for a deadlock; this implies that two or more parts from P(s i ) will remain assigned to some machine, regardless of the sequence of part moves adopted. Parts P REMOVED (s i ) can be removed from the system in some sequence of part moves. Per the definition of an UNSAFE state presented above no sequence of part

moves can empty more than |P(s i )| - 2 parts. Since parts P REMOVED (s i ) are removed using a permissible sequence of part moves (per the definition of P REMOVED (s i )), two or more parts will remain after any choice of subsequent sequence of part moves. This implies that s i \ [P(s i ) - P REMOVED (s i )] cannot be emptied completely. Thus if s i is UNSAFE, then every s i \ [P(s i ) - P REMOVED (s i )] is also UNSAFE.

Since the proposition of Lemma 4.4.2 is correct for both cases and no other cases exist, the proposition must be correct * *

Lemma 4.4.3: Let s i be a state and C(s i ) be the set of strongly connected components of the non-bounded graph representation of s i. Let c j ∈ C(s i ) be a SCC and let P(c j ) represent the set of parts assigned to vertices belonging to c j. If the assignment of parts in P(c j ) does not have an emptying sequence (is UNSAFE) then s i must also be UNSAFE. Thus, in terms of the state reduction operation, if the subset state s i \ P(c j ) is UNSAFE, for any c j ∈ C(s i ), then s i must also be UNSAFE.

PROOF: The state subset s i \ P(c j ) is given to be UNSAFE. This implies that no sequence of part moves can lead from s i \ P(c j ) to s 0, the empty state.

Since the safety of a state is contingent on all parts being able to complete and exit the system, any state in which some subset of parts cannot empty is therefore UNSAFE * *

Cormen et al [2001] define a topological sort of a directed acyclic graph, G = (V, A), as a linear ordering of all its vertices such that if G contains an edge (u, v) then u appears before v in the ordering. If a directed graph is not acyclic, then no linear ordering is possible. A depth first search algorithm is used to perform the topological sort of the SCCs of the condensation graph of D(s i ). The labeling strategy examines every pair of distinct vertices u, v ∈ V; if there exists a directed arc from u to v, labels f[u] and f[v] are assigned to u and v such that f[v] < f[u]. A detailed description of a topological sort algorithm, along with proofs establishing its correctness, can be found in Cormen et al [2001].

If this labeling strategy of the topological sort assigns labels f[u] and f[v] to vertices u, v ∈ V and f[v] < f[u], then vertex v is said to be downstream of the vertex u if v is reachable from u. Correspondingly, if v is downstream of u then u is said to lie upstream of v.
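A sketch of this finishing-time labeling on the (acyclic) condensation digraph is given below; for every arc (u, v) the labels satisfy f[v] < f[u], and the vertex with the lowest label is always a sink.

def finishing_time_labels(dag):
    # dag: {vertex: iterable of successors} of the condensation digraph
    f, visited, clock = {}, set(), [0]

    def dfs(v):
        visited.add(v)
        for w in dag.get(v, ()):
            if w not in visited:
                dfs(w)
        clock[0] += 1
        f[v] = clock[0]                  # label assigned when v finishes

    for v in dag:
        if v not in visited:
            dfs(v)
    return f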

Lemma 4.4.4: If some state s i is UNSAFE then at least one subset SCC state, s i \ P(c j ), where c j ∈ C(s i ), must also be UNSAFE.

PROOF: The proof is by contradiction. Contradictory proposition: state s i can be UNSAFE even if all subset SCC states s i \ P(c j ), where c j ∈ C(s i ), are SAFE.

If every subset SCC state s i \ P(c j ) is SAFE then there must exist an emptying sequence M([s i \ P(c j )] → s 0 ) for each c j ∈ C(s i ). Further, since the condensation digraph of D(s i ) (Section 4.3) is a directed acyclic graph, its vertices, the SCCs C(s i ), can be ordered in a linear ordering. Once the SCCs C(s i ) are linearly ordered they can be sequentially emptied starting from the lowest label SCC until all SCCs in C(s i ) have been emptied, i.e. P REMOVED (s i ) = P(s i ), and s i \ [P(s i ) - P REMOVED (s i )] = s 0, the empty state.

Since it is possible to construct an emptying sequence of moves, M(s i → s 0 ), based on the assumption of safety of the subset SCC states, the contrary proposition must be false. This establishes the original proposition of Lemma 4.4.4 * *

The method of sequentially emptying all SAFE subset SCC states is articulated into a formal algorithm. The algorithm is named the EMPTY-LOWEST-LABEL-SCC (ELLS) algorithm. This algorithm is presented in the subsequent sub section.

4.4.3 ELLS Algorithm

The ELLS algorithm is applied on the labeled condensation digraph of D(s i ) in which the SCCs have been labeled using the topological sort labeling strategy described earlier. ELLS sequentially attempts to remove P(c j ), where c j ∈ C(s i ), starting with the c j having the lowest label. Let P PRESENT (s i ) represent, at any step of the ELLS algorithm, the set of parts not yet removed from P(s i ). Pseudo-code describing the ELLS algorithm is presented in Figure 4.6.

INITIALIZATION: set P PRESENT (s i ) = P(s i )

MAIN PROCEDURE:
1. Select the lowest-label, non-empty SCC, c j (say), from C(s i ).
2. For the state s i \ P(c j ) determine if an emptying sequence of part moves exists. This may be done using an exact method such as the DFS tree search method described in Section 3.3. If no emptying sequence exists then QUIT and return UNSAFE, else continue.
3. set P PRESENT (s i ) = P PRESENT (s i ) - P(c j )
4. If P PRESENT (s i ) = φ, QUIT and return SAFE, else go to STEP 1.

Figure 4.6 Pseudo code for the ELLS algorithm

Theorem 4.4.1: The ELLS algorithm correctly evaluates states.

PROOF: The proof of this proposition is established by establishing propositions corresponding to the two ways in which ELLS may incorrectly evaluate some state. These are presented and established as follows:

PROPOSITION - I: ELLS does not flag an UNSAFE state as SAFE.

Given that the state is UNSAFE, it may be recalled that Lemma 4.4.4 states that at least one subset SCC state must be UNSAFE.

Since ELLS attempts to establish the safety of the state by establishing the safety of all SCCs, the existence of an UNSAFE SCC will cause ELLS to flag the state as UNSAFE. Hence ELLS does not flag UNSAFE states as SAFE.

PROPOSITION - II: ELLS does not flag a SAFE state as UNSAFE.

Given that the state is SAFE, it may be recalled that Lemma 4.4.1 states that all subsets of a SAFE state are SAFE. This implies that each subset SCC state is safe. Since ELLS can empty all SAFE SCCs, upon examination of all SCCs it will be able to empty all SCCs. Hence ELLS does not flag SAFE states as UNSAFE.

ELLS can be correct only if it satisfies both of the above cases. Since the validity of ELLS is established for both cases, the proposition of Theorem 4.4.1 must be correct * *
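The procedure of Figure 4.6 can be sketched as follows, assuming a state given as a mapping from parts to machines, the SCCs listed in increasing topological label order, and a hypothetical has_emptying_sequence check standing in for the exact tree search of Chapter 3; reduce_state is the filtering helper sketched earlier.

def ells(state, sccs_by_label):
    present = set(state)                             # parts not yet removed
    for scc in sccs_by_label:                        # lowest label first
        machines = set(scc)
        parts = {p for p in present if state[p] in machines}
        if not parts:
            continue                                 # SCC holds no parts
        if not has_emptying_sequence(reduce_state(state, parts)):
            return "UNSAFE"                          # step 2 of Figure 4.6
        present -= parts                             # step 3 of Figure 4.6
    return "SAFE"                                    # step 4 of Figure 4.6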

4.4.4 Proof of Validity of SCC Reduction

After a part finishes receiving processing at its current location it requests to be moved to its next processing location. The SCC to which this next processing location belongs is called the destination SCC. If the destination SCC of some state s i is represented by destSCC(s i ) then clearly destSCC(s i ) ∈ C(s i ) and P(destSCC(s i )) ⊆ P(s i ).

A SCC that has upstream SCCs but no downstream SCCs in the condensation digraph is called a sink SCC. The SCC with the lowest label, assigned by the topological sort (described earlier), is always a sink SCC. There may however be more than one sink SCC in the condensation digraph.

Theorem 4.4.2: In any state, s j (say), it suffices to examine only the parts assigned to the processing locations which lie in the destination SCC, P(destSCC(s j )), in the subset state s j \ P(destSCC(s j )), to determine the safety of s j.

PROOF: Given that:

(a) The state of the system before the current part move request was SAFE. Let this previous state be represented by s i and let s j represent the state corresponding to the current part move request.

(b) The assignment of all parts except that of the part which is requesting the move is unchanged from s i, the previous SAFE state.

Based on the assumptions (a) and (b) the proof proceeds by establishing two cases depending on whether or not destSCC(s j ) is a sink SCC.

CASE I: destSCC(s j ) is a sink SCC. This case is discussed by separately considering two sub-cases. The property of path equivalence can be used to partition directed graphs into individual SCCs (Section 4.3). State s j may be viewed as composed of two subset states, namely s j \ [P(s j ) - P(destSCC(s j ))] and s j \ P(destSCC(s j )). Since destSCC(s j ) is a sink SCC, the subset state s j \ P(destSCC(s j )) can be examined for the existence of an emptying sequence of part moves without requiring any moves of parts P(s j ) - P(destSCC(s j )).

SUB-CASE I-A: destSCC(s j ) is a sink SCC and upon examination the subset state corresponding to destSCC(s j ), s j \ P(destSCC(s j )), is found to be SAFE.

Safety of s j \ P(destSCC(s j )) implies that all parts P(destSCC(s j )) can be emptied. Further, since destSCC(s j ) is a sink SCC, all P(destSCC(s j )) can be emptied without requiring any moves of parts in other SCCs, P(s j ) - P(destSCC(s j )).

Once all parts P(destSCC(s j )) have been emptied the corresponding state can be represented by s j \ [P(s j ) - P(destSCC(s j ))]. It may be noted that s j \ [P(s j ) - P(destSCC(s j ))] is a subset state of s i. Since s i is a known SAFE state and every subset of a SAFE state is also SAFE (Lemma 4.4.1), it can be concluded that s j \ [P(s j ) - P(destSCC(s j ))] must be SAFE. This implies the existence of an emptying sequence of part moves, represented as M([s j \ [P(s j ) - P(destSCC(s j ))]] → s 0 ). Thus emptying sequences exist for both s j \ [P(s j ) - P(destSCC(s j ))] and s j \ P(destSCC(s j )), and all parts P(s j ) can be emptied. Hence if s j \ P(destSCC(s j )) is determined to be SAFE it can be concluded, without further examination, that s j must also be SAFE.

SUB-CASE I-B: destSCC(s j ) is a sink SCC and upon examination the subset state corresponding to destSCC(s j ), s j \ P(destSCC(s j )), is found to be UNSAFE.

Since s j \ P(destSCC(s j )), a subset of another state, is found to be UNSAFE, and any state with a subset that is UNSAFE must also be UNSAFE (Lemma 4.4.3), it can be concluded, without further examination, that s j must be UNSAFE.

Thus in both sub cases (when the destination SCC is a sink SCC) it suffices to examine only the state subset corresponding to the destination SCC, s j \ P(destSCC(s j )).

CASE II: destSCC(s j ) is not a sink SCC.

Let P DOWN [destSCC(s j )] represent the set of parts occupying vertices that comprise the SCCs downstream of destSCC(s j ) in the condensation digraph of D(s j ). Thus P DOWN [destSCC(s j )] does not include P(destSCC(s j )). All parts P DOWN [destSCC(s j )] can be emptied since s j \ P DOWN [destSCC(s j )] is a subset of a known SAFE state, s i (Lemma 4.4.1). Consider the state in which all parts P DOWN [destSCC(s j )] have been emptied; let this be represented as s j \ [P(s j ) - P DOWN [destSCC(s j )]]. The destination SCC, destSCC(s j ), now becomes a sink SCC. This reduces CASE II to CASE I, and as before it would suffice to examine only s j \ P(destSCC(s j )) in order to determine the safety (or lack thereof) of s j.

Since destSCC(s j ) can either be a sink SCC or not, and no cases other than CASE I and CASE II exist, the proposition of Theorem 4.4.2 must be true * *
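The reduction justified by Theorem 4.4.2 can be sketched as follows, where destination is the machine the requesting part is moving to, sccs is the output of the SCC computation of Section 4.3, and reduce_state is the filtering helper sketched earlier.

def destination_scc_subset(state, sccs, destination):
    # Keep only the parts assigned to machines of the destination SCC.
    dest_scc = next(scc for scc in sccs if destination in scc)
    keep = {part for part, machine in state.items() if machine in dest_scc}
    return reduce_state(state, keep), dest_scc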

The next section describes how the results established in this section are applied to reduce the deadlock search effort.

4.5 Application of SCC Reduction

As established in Section 4.4, it is sufficient to search for the existence of an emptying sequence only within the subset state corresponding to the destination SCC. The destination SCC is the smallest partition of parts affected by the currently requested part move. The non-consideration of vertices, parts, and arcs lying outside the destination SCC reduces the deadlock search effort and is called SCC Reduction.

Consider the five machine FMS having four parts with part routings given in Table 4.3. Part B has finished receiving processing at machine 1 and requests to be moved to machine 3. This request is currently being examined.

Table 4.3 Part Routings for SCC Reduction example
Part    Routing Steps
A       2 5
B
C       4 1
D       5 3 1

Figure 4.7 depicts the non-bounded graph associated with the part move request. The graph is analyzed for its strongly connected components. Vertices 3, 4 and 5 form one SCC and vertices 1 and 2 separately form single vertex SCCs.

Figure 4.7 SCC Reduction Example Original Graph

The graph after SCC Reduction is depicted in Figure 4.8.

Figure 4.8 SCC Reduction Example Reduced Graph after SCC Reduction

4.6 SCC Analysis

After SCC Reduction, the destination SCC is examined and a procedure counts the number of vertices, arcs and parts in this SCC. Let:

n be the number of vertices in the SCC being examined,
m be the number of parts occupying vertices of this SCC, and
e be the number of edges between vertices of this SCC.

SCC Analysis is comprised of the evaluation and categorization of conditions that correspond to certain relationships between n, m, and e. Some of these conditions provide a basis for making conclusive decisions regarding the safety of part move requests.

The remainder of this section is organized in three subsections. Section 4.6.1 presents all relationships and the corresponding sufficient conditions. The condition m < n = e is determined to provide a basis for evaluating the part move request as a safe move. The later two subsections establish the validity of this result. Section 4.6.2 puts forth some definitions and preliminary results that are used to establish the result. Proof of the safety of this condition is presented in Section 4.6.3.

4.6.1 Relationships Between n, m and e

Depending on the relationship between the values of n and m, one of the following three possible conditions must exist:

(a) n < m. This research presents a deadlock avoidance solution for a SU-RAS system with unit capacity machines (Section 2.1). Since the machines have unit capacities, the condition n < m is not feasible for the systems modeled in this research.

(b) n = m. This condition implies that one or more circuits comprising the SCC being examined are fully populated. As such, any part move request resulting in the realization of this condition must be unsafe.

(c) n > m. This condition is possible. No conclusive results could be derived based only on the condition n > m. The three sub cases which exist in this condition are listed as follows:

(i) e < n. This case is not possible. The discussion that follows presents two definitions and a proof that establishes this result.

The out-degree of a vertex v is the number of arcs originating at v. The in-degree of a vertex u is the number of arcs terminating at u.

Lemma 4.6.1: Any sub graph with n sub vertices and e sub arcs cannot be an SCC if e sub < n sub.

PROOF: By definition of strongly connected components, every vertex of an SCC is reachable from every other vertex. Therefore every vertex must have in-degree ≥ 1 and every vertex must have out-degree ≥ 1. Since there are n sub vertices, clearly the number of arcs, e sub, must be ≥ n sub. This proves that any sub graph with e sub < n sub cannot be an SCC.

(ii) e > n. No conclusive decision could be derived for this sub case. Part move requests meeting this condition are submitted to the tree search methods described in Section 3.9 and Section 3.10.

(iii) e = n. This condition is determined to indicate safety of the part move request being evaluated with SCC Analysis. Proof of this result requires some definitions and lemmas. These are presented in Sections 4.6.2 and 4.6.3.

4.6.2 Definitions and Some Results

Definitions of some terminology and proofs of some results are presented as preliminaries for establishing the proof of the safety of m < n = e (proof presented in Section 4.6.3).

A vertex of the SCC being examined is said to be a shared vertex if it is shared amongst two or more circuits. Similarly, an arc of the SCC being examined is said to be a shared arc if it is shared amongst two or more circuits.

Let C SCC be the set of all simple circuits in some SCC. If the SCC consists of a single circuit only (|C SCC | = 1) then clearly it has no shared vertices or shared arcs.

Conversely, all vertices and arcs that exist in one circuit only (are not shared amongst two or more circuits) are said to be non-shared vertices and non-shared arcs.

Lemma 4.6.2: |C SCC | > 1 implies the existence of one or more shared vertices.

PROOF: The proof is by contradiction. Contradictory statement: there exists some SCC which has |C SCC | > 1 and all circuits in C SCC are vertex disjoint (circuits do not have any shared vertices).

Consider a SCC which has |C SCC | > 1 and all its circuits are vertex disjoint, i.e. circuits do not have any shared vertices. Let c a and c b be two circuits in this SCC. Since c a and c b belong to the same SCC, the vertices of c a must be reachable from the vertices of c b and vice-versa. Thus some vertex v i of c a must be on a path which includes some vertex of c b, say v j. Further, for mutual reachability, some vertex v k of c b must be on a path which includes some vertex v l of c a. Since v i and v l both belong to c a, there must be a path from v l to v i. Similarly there must be a path from v j to v k in c b.

This implies the existence of a circuit which may be traversed beginning with v i ---> v j ---> v k ---> v l ---> v i (where the notation ---> implies the existence of a path from the vertex at the tail end of the dotted arrow to the vertex at the tip of the arrow). Let this circuit be referred to as c c. However c c has vertices in common with both c a and c b. This contradicts the contradictory statement and thus establishes the existence of shared vertices between circuits of an SCC wherever |C SCC | > 1 * *

The SCC is considered to consist of a set of shared segments and non-shared segments.

A sub-graph of an undirected graph is said to be connected if there exists a path from every vertex in the sub-graph to every other vertex in the same sub-graph. For some directed graph G = (V, A), a subset s of V is said to be connected if the underlying undirected graph of s is connected. A connected component of G is a maximal connected subset s of V, i.e. no connected subset of V strictly contains s.

A non-shared segment (NS Segment) consists of a connected component, a leading arc and a trailing arc. The component and the leading and trailing arcs are unique

to one circuit only (not shared among multiple circuits). A NS Segment can have an empty component; in such a case the NS Segment consists of a single arc only.

The component of the non-shared segment is a simple path from the first vertex (on which the leading arc of the NS Segment is incident) to the last vertex (from which the trailing arc of the NS Segment originates). As a consequence every vertex of some non-empty component of a NS Segment has in-degree = out-degree = 1.

Lemma 4.6.3: Let ns be some NS Segment of a SCC which consists of two or more circuits (|C SCC | > 1, where C SCC is the set of simple circuits comprising the SCC). If ns has n ns vertices and e ns arcs then e ns = n ns + 1.

PROOF: By definition all vertices of ns are included in the component of ns. A component with n ns vertices must have n ns - 1 arcs. Accounting for the leading and the trailing arcs we get e ns = (n ns - 1) + 2 = n ns + 1. This result is valid for all NS Segments in SCCs which have |C SCC | > 1. If |C SCC | = 1 then e ns = n ns, since all arcs and vertices are non-shared * *

A shared segment (S Segment) is a connected component consisting exclusively of vertices and arcs shared amongst two or more circuits. At a minimum a shared segment may contain one vertex. Unlike NS Segments, shared segments begin and end with vertices. Clearly, if some SCC has |C SCC | = 1 then it has no S Segments.

Figure 4.9 depicts three separate strongly connected subgraphs in which two or more circuits share some vertices and arcs. The shared vertices in each of these strongly connected components are represented by solid circles whereas non-shared vertices are represented by unfilled circles. The shared arcs are represented by discontinuous lines whereas the non-shared arcs are represented by continuous unbroken lines.

Figure 4.9 Examples of Shared and Non-Shared Segments

There is only one shared segment in each of the three SCCs depicted in Figure 4.9. The shared segment in Figure 4.9 (a) is a circuit. The SCC, in this case, has six vertices, three of which are shared and the other three are non-shared. There are a total of nine arcs, of which three are shared and the remaining six are non-shared. This SCC has three NS Segments, each of which contains one vertex and two arcs. The S Segment in Figure 4.9 (b) consists of four vertices and three arcs. There are four NS Segments in this SCC, two of which contain single arcs only while each of the two remaining NS Segments consists of one vertex and two arcs. The SCC in Figure 4.9 (c) has one S

Segment consisting of two vertices and one arc. This SCC has three NS Segments, each of which consists of one vertex and two arcs.

A circuit is said to be a simple circuit if it cannot be decomposed into smaller circuits. Any circuit which is not simple can be decomposed into smaller circuits. A non-simple circuit C, consisting of the vertex set V and the arc set E, can be decomposed into two or more simple circuits formed strictly from some subsets of V and E.

Lemma 4.6.4: If a simple circuit has n vertices and e arcs, then e = n.

PROOF: (Proof is by induction.) Let the proposition P(n) be the statement that every simple circuit with n vertices and e arcs has e = n. The proof establishes the truth of P(1) and then shows that if P(k) is true then P(k+1) is true, for any positive integer k.

Let P(1) be the proposition that every simple circuit consisting of a single vertex and e arcs has e = 1. Let C one be some simple circuit consisting of a single vertex and let V one be the set of its vertices and E one be the set of its arcs. For C one to be a simple circuit, it must have an arc beginning from and ending at the solitary vertex. Thus if V one = { v 1 } then an arc can be drawn from v 1 to v 1. This arc is represented by (v 1, v 1 ). Presence of more than one instance of the arc (v 1, v 1 ) implies a non-simple

circuit. Thus E one = {(v 1, v 1 )} is the only possible arc set and clearly e = |E one | = 1. This establishes the truth of the proposition P(1).

Let P(k) be the proposition that every simple circuit with k vertices and e arcs has e = k. Let C k be some simple circuit consisting of k vertices and let V k = { v 1, …, v k } be its set of vertices and E k be its set of arcs. If C k is a circuit then there is a path (Table 2.1) consisting of an alternating sequence of vertices and arcs, depicted by G k, such that G k begins and ends at the same vertex and contains all vertices in V k. All vertices of V k appear in G k exactly once with the exception of the vertex which occurs as both the first and last vertex of G k. Since the first and the last vertices of G k are the same vertex in V k, there must be exactly k+1 total vertices in G k.

Since adjacent vertex pairs in G k are unique, there must be (k+1) - 1 = k unique arcs in G k. If P(k) is true (e = |E k | = k) and since all arcs in G k are unique, then G k must contain every arc of E k. Further, if E k = {(v 1, v 2 ), (v 2, v 3 ), …, (v k-1, v k ), (v k, v 1 )} then

G k = v 1, (v 1, v 2 ), v 2, …, v k-1, (v k-1, v k ), v k, (v k, v 1 ), v 1.

Let P(k+1) be the proposition that every simple circuit with k+1 vertices has q arcs, where q = k+1. Let C k+1 be some simple circuit comprised of k+1 vertices. If V k+1 is the vertex set of C k+1, such that V k+1 = V k ∪ { v k+1 }, then |V k+1 | = k+1. Let E k+1 be the arc set of C k+1. It is possible to construct the arc set E k+1 by removing the arc (v k, v 1 ) from E k and adding two new arcs (v k, v k+1 ) and (v k+1, v 1 ). Thus

E k+1 = [E k - {(v k, v 1 )}] ∪ {(v k, v k+1 ), (v k+1, v 1 )}, and

|E k+1 | = |E k - {(v k, v 1 )}| + |{(v k, v k+1 ), (v k+1, v 1 )}| = (k - 1) + 2 = k + 1.

Thus q = |E k+1 | = k + 1. This proves the truth of the proposition P(k+1). Since P(1) is true and P(k) implies P(k+1), by induction the result must be valid for all simple circuits * *

Lemma 4.6.5: Every sub graph induced by a circuit is strongly connected.

PROOF: An SCC is a maximal set of mutually reachable vertices. Since all vertices of a circuit are mutually reachable, every sub graph induced by some circuit must be strongly connected * *

Lemma 4.6.6: In any SCC the number of NS Segments must be greater than the number of S Segments.

PROOF: By definition, the vertices and arcs of a NS Segment belong to exactly one circuit (vertices or arcs belonging to more than one circuit are shared). The vertices and arcs of an S Segment belong to two or more circuits. Thus for an SCC with two or more circuits the number of S Segments must be less than the number of NS Segments * *

4.6.3 Safety of m < n = e

Two theorems establish the safety of the condition m < n = e. The first theorem has Lemmas 4.6.7 and 4.6.8 associated with it. This theorem establishes the logical equivalence that any SCC which has as many arcs as it has vertices must consist of a single circuit

only. Thus if the SCC of n vertices and e arcs has C SCC as the set of all simple circuits in this SCC, then (e = n) ⇔ (|C SCC | = 1).

Lemma 4.6.7 establishes the proof of the implication in the reverse direction, (|C SCC | = 1) ⇒ (e = n). Lemma 4.6.8 establishes the proof of the implication (|C SCC | > 1) ⇒ (e > n), which is used to establish the proof of the forward direction of Theorem 4.6.1, (e = n) ⇒ (|C SCC | = 1). The second theorem establishes the safety of SCCs which satisfy the condition m < n = e.

Lemma 4.6.7: Any SCC of n vertices and e arcs that is comprised of one circuit only (|C SCC | = 1, where C SCC is the set of simple circuits in the SCC being considered) must have e = n.

PROOF: It has been shown that every simple circuit has as many nodes as it has arcs (Lemma 4.6.4). Since it has also been shown that every sub-graph induced by a circuit is strongly connected (Lemma 4.6.5), it follows that every SCC comprised of a single circuit (|C SCC | = 1) must have e = n * *

Lemma 4.6.8: Any SCC of n vertices and e arcs that is comprised of two or more circuits (|C SCC | > 1, where C SCC is the set of simple circuits in the SCC being considered) must have e > n.

PROOF: Lemma 4.6.2 established that every SCC which consists of two or more simple circuits must have shared vertices. The discussion and examples in Figure 4.9 depicted SCCs which had both shared vertices as well as shared arcs. For an SCC with |C SCC | > 1, two possibilities exist:

1. The SCC contains some shared vertices but no shared arcs. Case I of this proof establishes that e > n for all SCCs which contain shared vertices but no shared arcs.
2. The SCC contains some shared vertices and some shared arcs. Case II of this proof establishes that e > n for all SCCs which contain shared vertices and shared arcs.

CASE I: Any SCC of n vertices and e arcs that consists of two or more circuits (|C SCC | > 1, where C SCC is the set of simple circuits in the SCC being considered) and has some shared vertices but no shared arcs must have e > n.

PROOF: Let |C SCC | = k, and let C SCC be represented as C SCC = { c 1, …, c k }. Further let n j and e j represent the number of vertices and arcs in the j th circuit, c j ∈ C SCC. As some vertices are shared, the number of vertices in the SCC must be less than the sum of the vertices in each of the circuits; thus n < Σ j=1..k n j. However, since none of the arcs are shared, e = Σ j=1..k e j.

It has been shown that every simple circuit has an equal number of arcs and vertices. Therefore e j = n j for every circuit c j ∈ C SCC. Substituting, we get e = Σ j=1..k e j = Σ j=1..k n j. However, since n < Σ j=1..k n j, therefore e > n.

CASE II: Any SCC of n vertices and e arcs that consists of two or more circuits (|C SCC | > 1) and has both shared vertices and shared arcs must have e > n.

PROOF: The proof is by contradiction of the following statement: some SCC with |C SCC | > 1 has e = n.

Let SS represent the set of all shared segments in the SCC being examined, such that |SS| = p. The set may be written as SS = { ss 1, …, ss p }. Similarly, let NS represent the set of all non-shared segments in the SCC being examined, such that |NS| = q. The set may be written as NS = { ns 1, …, ns q }.

Let the i th shared segment have n ssi vertices and e ssi arcs. By definition each shared segment is a maximal set of shared vertices and arcs. This implies that the shared segments are disjoint. Thus the total number of shared vertices in the SCC, N SS = Σ i=1..p n ssi, and the total number of shared arcs in the SCC being examined,

E SS = Σ i=1..p e ssi. The number of arcs in any shared segment ss i has been shown to be e ssi ≥ n ssi - 1. Consequently the minimum number of arcs in some shared segment ss i is min(e ssi ) = n ssi - 1.

Thus for the set SS the minimum number of arcs, min(E SS ), can be no less than Σ i=1..p min(e ssi ). Since min(e ssi ) = n ssi - 1, this may be rewritten as:

min(E SS ) ≥ Σ i=1..p (n ssi - 1),
min(E SS ) ≥ Σ i=1..p n ssi - p,
min(E SS ) ≥ N SS - p.

The number of arcs in any non-shared segment ns j (say) has been shown to be min(e nsj ) = n nsj + 1. Thus for the set NS the minimum number of arcs, min(E NS ), can be no less than Σ j=1..q min(e nsj ). This may be rewritten as:

min(E NS ) ≥ Σ j=1..q (n nsj + 1),
min(E NS ) ≥ Σ j=1..q n nsj + q,
min(E NS ) ≥ N NS + q.

Arcs can be either shared or non-shared. Since the set of shared arcs and the set of non-shared arcs are mutually disjoint, the minimum number of arcs in an SCC with both shared and non-shared arcs satisfies min(e) ≥ min(E SS ) + min(E NS ). This may be rewritten as:

min(e) ≥ (N SS - p) + (N NS + q),
min(e) ≥ (N SS + N NS ) + (q - p).

Every vertex of the SCC being examined must be either shared or non-shared. Since the sets of shared and non-shared vertices are disjoint, the total number of vertices in the SCC is n = N SS + N NS. Thus min(e) ≥ n + (q - p). However, it has already been shown (Lemma 4.6.6) that q > p, therefore min(e) > n. This contradicts the original statement. Thus any SCC with |C SCC | > 1, which has shared vertices and shared arcs, has e > n.

Case I and Case II establish that if |C SCC | > 1 then e > n under their respective conditions. Since no other cases exist, the proposition of Lemma 4.6.8 must be true * *

Theorem 4.6.1: If an SCC of n vertices and e arcs has C SCC as the set of all simple circuits in this SCC then (e = n) ⇔ (|C SCC | = 1).

PROOF: Lemma 4.6.7 established that if |C SCC | = 1 then e = n. This establishes the proof in the reverse direction. Further, Lemma 4.6.8 established that if |C SCC | > 1 then e > n. A consequence of this result is that e ≠ n if |C SCC | > 1. Thus e = n only if |C SCC | = 1. This establishes the proof in the forward direction. Since proofs have been established in both the forward and the reverse direction, the logical equivalence (e = n) ⇔ (|C SCC | = 1) must be valid * *

Theorem 4.6.2: Let the SCC under consideration have n vertices, e arcs and m parts occupying vertices of this SCC. If e = n and m < n then there exists an emptying sequence of moves for all parts in the SCC.

PROOF: It has been established in Theorem 4.6.1 that every SCC with e = n must consist of exactly one simple circuit (|C SCC | = 1). Since the SCC is a simple circuit, all possible part flow along the arcs of this circuit will be unidirectional. Since m < n, this circuit must be less than fully populated. Unidirectional flow in a less than fully populated circuit allows parts to be advanced until they either exit the SCC or complete all their processing requests. Thus the subset of parts in this condition has an emptying sequence, and applying SCC Reduction would remove parts in other SCCs; thus the system can be emptied * *
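The decision logic of this section, applied to the destination SCC after SCC Reduction, reduces to a three-way test. The following is an illustrative sketch only; n, m and e are the vertex, part and arc counts of the destination SCC.

def scc_analysis(n, m, e):
    # n < m cannot occur with unit capacity machines, and e < n is
    # impossible for an SCC (Lemma 4.6.1).
    if m == n:
        return "UNSAFE"       # some circuit of the SCC is fully populated
    if m < n and e == n:
        return "SAFE"         # single, less than fully populated circuit
    return "UNDECIDED"        # e > n: submit to the look ahead search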

4.7 Free Path Reduction

Free Path Reduction, described in Section 3.5, is also used outside and before a call is made to the look ahead procedure. Here, Free Path Reduction is applied in conjunction with SCC Reduction. The steps listed below depict the reduction procedure.

1. remove parts having free paths; repeat step 1 until all parts with free paths are removed
2. compute strongly connected components
3. apply SCC Reduction
4. if any changes are made during step 3, go to step 1

4.8 Circuit Sufficiency

Some part move requests will contain part flow deadlocks. The circuit detection procedure described in Section 3.7 is employed outside and before the look ahead procedure. Any state found to have a part flow deadlock is flagged as UNSAFE and further evaluation of this state is not necessary.

4.9 Application of Pre-Solve Procedures

A part move request is created when a part has finished receiving processing at the resource it is currently assigned to. The part requests transfer to the next resource in its routing. If the next resource is currently not available the part is logically en-queued in a pending requests queue. The request will remain in this queue until another event takes place. Transfer of any part in the system constitutes an event.

The requested system state is evaluated and updated by removing any parts with free paths as described in Section 4.7. If none of the parts in the updated requested system state have free paths then a non-bounded directed graph is prepared to reflect the requested system state as described in Section 4.2. The procedure described in Section 4.3 is used to compute the SCCs in the non-bounded graph. The requested system state is updated to reflect only the parts in the destination SCC. The destination SCC is then evaluated with the SCC reduction procedure as described in Section 4.5. If upon SCC reduction any parts are removed from the updated requested system state then this state is re-evaluated for parts with free paths as shown in Figure 4.10. If SCC reduction is not able to achieve the removal of any parts then the updated requested system state is analyzed using the SCC analysis methods. If

SCC analysis is able to reach a conclusive decision as described in Section 4.6 then the part move request is either flagged as safe or unsafe (as the case may be). If no conclusive decision is reached then the updated requested system state is examined for part flow deadlocks using the circuit sufficiency procedure (Section 4.8). If a part flow deadlock is detected then the part move request is flagged as unsafe and is denied. However, if no part flow deadlock is detected then the updated requested system state must be submitted to the look ahead algorithm.

Figure 4.10 Flow Chart of the Deadlock Avoidance Methodology
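The flow of Figure 4.10 can be sketched as follows. Every helper name is an illustrative assumption rather than the actual implementation: remove_free_path_parts applies Free Path Reduction, build_non_bounded_graph_from_state prepares the graph of Section 4.2, scc_arc_counts returns n, m and e for the destination SCC, has_part_flow_deadlock is the circuit sufficiency check, and look_ahead_is_safe is the tree search of Chapter 3.

def evaluate_request(request_state, destination):
    state = remove_free_path_parts(request_state)             # Section 4.7
    while True:
        graph = build_non_bounded_graph_from_state(state)      # Section 4.2
        sccs = strongly_connected_components(graph)            # Section 4.3
        reduced, dest_scc = destination_scc_subset(state, sccs, destination)
        if len(reduced) == len(state):
            break                                              # SCC Reduction removed nothing
        state = remove_free_path_parts(reduced)                # re-apply Free Path Reduction
    n, m, e = scc_arc_counts(graph, dest_scc, state)
    verdict = scc_analysis(n, m, e)                            # Section 4.6
    if verdict != "UNDECIDED":
        return verdict
    if has_part_flow_deadlock(state):                          # circuit sufficiency, Section 4.8
        return "UNSAFE"
    return "SAFE" if look_ahead_is_safe(state) else "UNSAFE"   # Chapter 3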

4.10 Summary

This chapter outlined techniques that were developed to limit the amount of computation required to evaluate part move requests. This was achieved, in some cases, by satisfying sufficiency conditions, whenever possible, prior to a call to the look ahead procedure. In other cases an attempt was made to reduce the size of the search space. The working of the different pre-solve procedures was described with the help of Figure 4.10.

Chapter 5 EXPERIMENTS

5.1 Introduction

The deadlock avoidance solution proposed in this thesis consists of pre-solve procedures and a tree search algorithm. The pre-solve procedures attempt to reduce the search space and try to make a decision on each part move request being evaluated. If a decision on the safety of the part move request is made using the pre-solve procedures then no further evaluation is required. If the pre-solve procedures cannot make a safe (deadlock free) decision for the part move request then the complete look ahead search is invoked.

It is possible to construct a deadlock avoidance algorithm with free path reduction as the only pre-solve procedure. Thus a proposed deadlock avoidance algorithm might use either the full set of pre-solve procedures or use free path reduction as the only pre-solve procedure.

The look ahead search consists of a tree search algorithm which systematically generates and evaluates parts assignments reachable from the parts assignment being

evaluated. Two different tree search procedures were developed and presented in this thesis. These are the Depth First Search (DFS) and the Hybrid Search (HS), which are distinct algorithms developed for separate implementations. It is possible to construct a deadlock avoidance algorithm with either the DFS or the HS algorithm for performing the tree search. The performance of each algorithm will depend on the structure of the system as well as the dynamic part congestion within the system.

Evaluation of individual states is done by a state evaluation algorithm. Two different state evaluation algorithms were developed in this research. These are: 1) the Circuit Sufficiency (CS) and 2) the FLOW algorithm. Details of these algorithms are presented in Sections 3.6 and 3.7. It is possible to construct a deadlock avoidance algorithm in which the tree search employs either the CS or the FLOW algorithm for state evaluation.

Repetitive generation and evaluation of states during tree search may be avoided using the procedure presented in Section 3.11. It is possible to construct a deadlock avoidance algorithm without incorporating the procedure for repetition avoidance. Such an implementation would allow repetitive generation and evaluation of states during the tree search.

It should be obvious that a deadlock avoidance algorithm may be constructed from the procedures presented in this thesis with or without the procedure to avoid repetitive generation and evaluation of states during tree search.

5.1.1 Logical Components of Proposed Solution

A deadlock avoidance algorithm can be composed of the following logical components:

1. Pre-Solve Procedure
   a. Free Path Reduction
   b. Non-SCC Parts Removal
   c. Non-SCC Routing Reduction
   d. SCC Analysis
   e. Circuit Detection
2. Look Ahead Search
   a. Tree Search (DFS or HS)
   b. State Evaluation (CS or FLOW)
   c. Repetition Avoidance

Alternate procedures exist to perform the function of some logical components (for example, either DFS or HS may be used for tree search). Table 5.1 lists numerous possible deadlock avoidance algorithms that may be formed by using different

alternate algorithms for performing the function of each of the logical components. Each row of Table 5.1 represents a valid deadlock avoidance algorithm.

Table 5.1 Recipes of Deadlock Avoidance Algorithms
Sr #    Tree Search (DFS / HS)    State Evaluation (CS / FLOW)    Repetition (Allow / Avoid)    Pre-Solve (All / Least)
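Since the rows of Table 5.1 are all combinations of the alternatives listed in Section 5.1.1, the sixteen recipes can be enumerated directly, as in the sketch below; the ordering here is arbitrary and need not match the serial numbers of the table.

from itertools import product

tree_searches     = ["DFS", "HS"]
state_evaluations = ["CS", "FLOW"]
repetition        = ["Avoid", "Allow"]
pre_solve         = ["All", "Least"]   # full pre-solve set, or free path reduction only

recipes = list(product(tree_searches, state_evaluations, repetition, pre_solve))
for number, recipe in enumerate(recipes, start=1):
    print(number, *recipe)             # sixteen algorithm combinations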

As an example, algorithm #1 in Table 5.1 depicts a deadlock avoidance algorithm which employs DFS for tree search, CS for state evaluation, avoids repetitive generation and evaluation of states, and includes all pre-solve procedures.

5.1.2 Experimental Objective

The objective of this thesis is to develop and identify efficient maximally permissive deadlock avoidance algorithms for the class of systems and experimental conditions considered. Given that multiple algorithm combinations have been developed and proposed, an experimental setup is required to gain insight into the relative efficacy of the sixteen deadlock avoidance algorithms listed in Table 5.1.

The efficacy of the proposed algorithm combinations can be compared using different measures of performance. Two such measures are:

1. Time required by the algorithm to resolve one or more part move requests
2. Average number of state evaluations required to resolve one or more part move requests

A response variable which measures the time required to resolve one or more part move requests can be expected to vary with the computer hardware and software in which comparative experiments are run. A response variable which measures the average number of state evaluations required to resolve one or more part move requests is independent of the computer hardware and software in which comparative experiments are run. However, this response variable cannot be used to compare algorithms which employ different state evaluation methods.

FLOW and CS are two different state evaluation methods proposed in this thesis. Evaluation of a state with CS and FLOW could require different amounts of computational effort. Consequently, a response variable that measures the average number of state evaluations to resolve one or more part move requests can be used for comparing algorithm combinations which employ the same state evaluation method, but should not be used when the algorithm combinations to be compared employ different state evaluation functions.

5.1.3 Chapter Outline

An experiment was conducted to compare the relative performance of the different deadlock avoidance algorithm combinations presented. A simulation of an FMS was used as the experimental environment. The experimental environment, along with the settings of the various system parameters, is described in Section 5.2. The experimental design, results and interpretation of the results are presented in Section 5.3. Two performance measures were described in Section 5.1.2; the experiment in Section 5.3 employed the time required to evaluate a set of part move requests as the performance measure. In another experiment, a simulation was run and the number of states evaluated for each part move request was recorded. This data was collected to gain an understanding of the distribution of the number of states evaluated; it is analyzed in Section 5.4 and some observations are made.

5.2 Experimental Environment

A multitude of system parameters may affect the performance measures described in Section 5.1.2. Some of these parameters are listed in Table 5.2.

Table 5.2 System Parameters
Sr. #  Parameter
1      Number of machines in the FMS
2      Capacity of machines
3      Buffers in the FMS
4      Arrival rates of parts
5      Service rates of machines
6      Number of part types
7      System configuration
8      Deadlock avoidance algorithm

In order to completely benchmark the algorithms of Table 5.1 across these parameters, an exceedingly large factorial experiment would be required, and the experiment would require a taxonomy of manufacturing systems. In this thesis the algorithms are demonstrated on a single system only.

A single FMS is simulated using discrete event simulation with the AutoMod simulation software. Each of the deadlock avoidance algorithm combinations listed in Table 5.1 was implemented as a different simulation model. Parameters one through seven, listed in Table 5.2, were fixed in the simulation.

The proposed deadlock avoidance algorithm combinations may be used to avoid deadlocks in a particular class of FMS. Section 3.2 lists the set of assumptions made about the class of manufacturing systems considered. Based on these assumptions, all machines in the simulated FMS have unit capacity and storage buffers are not considered. The FMS modeled belongs to the class of systems referred to as single unit resource allocation systems, often abbreviated as SU-RAS.

The FMS modeled is comprised of 10 machines. Each machine services parts with service times uniformly distributed between 3 and 5 minutes. Parts arrive for service in a random fashion, with time between arrivals uniformly distributed between 4.72 and 6.72 minutes.

So as to evaluate all kinds of part interactions possible within the selected system configuration, it was decided not to limit the parts to a pre-defined set of part types. Instead, part routings were randomly generated, with route lengths uniformly distributed between one and the number of machines in the system. The routings themselves were constructed as sequences of machine visits. Each part arriving at the system had a randomly generated sequence of machine visits such that no part's routing contained two consecutive operations at the same machine.

Choices in part routings were not considered; revisits are allowed in the routings.

For the purpose of illustration, Figure 5.1 depicts a screenshot of a simulated part assignment which is to be evaluated for safety. The FMS depicted comprises fourteen machines labeled M1, M2, ..., M14. There are ten parts assigned to the machines, labeled P1, P2, ..., P10. The routings of the ten parts are given in Table 5.3, and all parts are at the first step of their respective routes. Machines are depicted by wireframe cubes and machine fixtures by smaller wireframe cubes contained within the machine envelope. Each machine has one fixture of unit capacity; fixtures are not separately labeled in the figure. Parts are depicted by small blue colored cubes.

Figure 5.1 Screenshot of the simulation of an FMS

Table 5.3 Part Routings of the FMS illustrated in Figure 5.1
P1: M7 M1 M2 M13 M1 M6 M10 M9 M4
P2: M1 M11 M4 M10 M9 M1 M5 M12 M2
P3: M14 M2 M1 M2 M3 M12
P4: M10 M9 M10 M4 M11 M14 M7 M14 M7
P5: M12 M6 M12 M6 M7
P6: M8 M14 M12 M8 M11 M2 M5
P7: M5 M1 M13 M6 M2
P8: M13 M7 M11
P9: M3 M12 M7 M10 M12 M3 M7 M10 M13 M7
P10: M9 M13 M5 M1
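The routings in Table 5.3 satisfy the constraints described above: route lengths uniformly distributed between one and the number of machines, no two consecutive operations at the same machine, and revisits permitted. The following sketch shows one way such routings could be drawn at random; it is illustrative only and is not the generator used inside the AutoMod models.

```python
# Illustrative random routing generator consistent with the stated constraints:
# route length uniform on [1, number of machines], no two consecutive operations
# at the same machine, revisits allowed. Not the generator used in the simulation.
import random

def random_routing(num_machines, rng):
    length = rng.randint(1, num_machines)          # uniform route length
    routing = []
    for _ in range(length):
        candidates = [f"M{m}" for m in range(1, num_machines + 1)
                      if not routing or f"M{m}" != routing[-1]]
        routing.append(rng.choice(candidates))     # never repeat the previous machine
    return routing

rng = random.Random(0)
for p in range(1, 11):                             # ten parts, as in Table 5.3
    print(f"P{p}:", " ".join(random_routing(14, rng)))
```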

5.3 Comparison of Algorithm Combinations

As outlined in Section 5.1.1, each of the proposed deadlock avoidance algorithm combinations is composed of four logical components, and two options exist for each of these four logical components. Table 5.4 lists the logical components and their options.

Table 5.4 Logical Components and the Options
Logical Component         Option 1                   Option 2
Pre-Solve                 only Free Path Reduction   all pre-solve procedures
Tree Search Strategy      DFS                        HS
State Evaluation Method   FLOW                       CS
Repeat Avoidance          allow repetition           avoid repetition

The objective of the experiments in this section is to determine whether one of the options performs better than the other for each of the logical components. If significant differences are found, then the option providing the better performance would be the recommended option for that logical component under the conditions of the experiment. Section 5.3.1 presents the experimental design used to achieve the experimental objective. Analysis of the experimental results is presented in Section 5.3.2 and an interpretation of the results is presented in Section 5.3.3.

5.3.1 Experimental Design

A 2⁴ factorial experiment is considered. Each of the four logical components listed in Table 5.4 is considered a factor, and the two options available for each component are the two levels of the corresponding factor.

Two potential performance measures were described in Section 5.1.2. Since algorithms with different state evaluation methods are to be compared in this experimental design, the time required to resolve one or more part move requests was selected as the performance measure. Each of the deadlock avoidance algorithm combinations listed in Table 5.1 was implemented as a separate simulation model. For each completed simulation run, AutoMod reports the actual time required by the simulation to execute; this is referred to as the simulation time. Simulation time provides an estimate of the time required to resolve all part move requests over the model horizon and was selected as the response variable.

The experiment was replicated twice, requiring 32 simulation runs. Simulation runs were made in a random order, and each run corresponded to 100 days of simulated operation of the FMS. Replicates of each model were run using non-overlapping sequences of random numbers.

The design matrix and the response data obtained from the experiments are shown in Table 5.5.

Table 5.5 Design Matrix and Experimental Data

5.3.2 Analysis

The analysis was done using the Design-Expert statistical software. Transformation of the data was required in order to satisfy the assumptions of normality and equality of variances required for the analysis. The inverse transformation y* = 1/(y + 3) adequately satisfied the assumptions of normality and equality of variances, as shown later in this section.
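For reference, the structure of the design and the transformation can be written out as follows. This is a sketch only: the analysis in this thesis was performed with Design-Expert, the assignment of letters A and C to the pre-solve and state evaluation factors is assumed from the ordering of Table 5.4 (B and D are identified in the text as tree search and repeat avoidance), and the response values of Table 5.5 are not reproduced.

```python
# Sketch of the 2^4 full factorial design (two replicates, 32 runs) and the
# inverse transformation y* = 1/(y + 3) applied to the simulation-time response.
# Factor letters other than B and D are assumed; responses are placeholders.
from itertools import product

REPLICATES = 2
factors = ("A: pre-solve", "B: tree search", "C: state evaluation", "D: repeat avoidance")
design = [levels for levels in product((-1, +1), repeat=len(factors))
          for _ in range(REPLICATES)]
assert len(design) == 32        # 16 treatment combinations x 2 replicates

def transform(simulation_time):
    """Inverse transformation applied before the factorial analysis."""
    return 1.0 / (simulation_time + 3.0)
```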

Table 5.6 presents the effects estimates for this experiment, using the transformed response variable y*.

Table 5.6 Effects Estimates

Factors B (tree search) and D (repeat avoidance) and the BD interaction account for about 83% of the variability in the transformed response variable. Figure 5.2 presents the normal probability plot of the effects estimates following the transformation y* = 1/(y + 3).
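As a reminder of how effect estimates of this kind are obtained for a two-level factorial, the sketch below computes a main effect as the difference between the mean transformed response at a factor's high and low levels, and a sum of squares from the corresponding contrast. This is generic textbook arithmetic over a hypothetical run list, not the Design-Expert output.

```python
# Generic two-level factorial arithmetic (not the Design-Expert computation).
# `runs` is a hypothetical list of (coded_levels, y_star) pairs for the 32 runs,
# where coded_levels holds the -1/+1 settings of factors A-D for that run.

def main_effect(runs, j):
    """Mean response at the high level of factor j minus the mean at its low level."""
    hi = [y for levels, y in runs if levels[j] == +1]
    lo = [y for levels, y in runs if levels[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def sum_of_squares(runs, *indices):
    """Sum of squares of a main effect or interaction, computed from its contrast."""
    contrast = 0.0
    for levels, y in runs:
        sign = 1
        for j in indices:
            sign *= levels[j]
        contrast += sign * y
    return contrast ** 2 / len(runs)
    # e.g. sum_of_squares(runs, 1, 3) gives the BD interaction sum of squares
```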

Figure 5.2 Normal Probability Plot of Effects Estimates

Based on Figure 5.2, factors B (tree search) and D (repeat avoidance) along with the BD interaction appear important. The analysis of variance for this model is summarized in Table 5.7.

Table 5.7 Analysis of Variance

Figures 5.3 and 5.4 present, respectively, a normal probability plot of the residuals and a plot of the residuals versus the predicted transformed response variable.

Figure 5.3 Normal Probability Plot of Residuals

Figure 5.4 Plot of Residuals versus Predicted Response

Since the plots of Figures 5.3 and 5.4 are satisfactory, it may be concluded that the model for the transformed response y* = 1/(y + 3) requires only factors B (tree search) and D (repeat avoidance) along with the BD interaction for adequate interpretation, and that the underlying assumptions of the analysis are satisfied.

5.3.3 Interpretation of Results

Figure 5.5 presents a plot of the BD interaction.

Figure 5.5 Plot of the BD Interaction
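The points plotted in an interaction plot such as Figure 5.5 are the mean transformed responses at each combination of the two factor levels. A small sketch, reusing the hypothetical run list from the earlier sketches (with B assumed at index 1 and D at index 3):

```python
# Cell means behind a BD interaction plot: average transformed response for each
# of the four (tree search, repeat avoidance) level combinations. The index
# positions of B and D are assumed from the factor ordering used earlier.

def bd_cell_means(runs, b_index=1, d_index=3):
    cells = {}
    for levels, y in runs:
        cells.setdefault((levels[b_index], levels[d_index]), []).append(y)
    return {combo: sum(ys) / len(ys) for combo, ys in cells.items()}
```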

It may be observed from Figure 5.5 that the tree search effect is small when repeat avoidance is at the high level (Avoid) and large when repeat avoidance is at the low level (No Avoid). Thus, repeat avoidance should be used with the DFS tree search. Repeat avoidance does not have a large effect when used with the HS tree search.

Figure 5.6 depicts the predicted simulation time values for the various treatment combinations of the B (tree search) and D (repeat avoidance) factors.

Figure 5.6 Plot of Predicted Response

It may be observed from Figure 5.6 that the best simulation times appear to be obtained when repetition is avoided (high level of the repeat avoidance factor) with the DFS tree search strategy.

5.4 Analysis of Number of States Evaluated

A 100 day simulation run was made with a model having the algorithm combination with:

- all pre-solve procedures
- DFS tree search
- CS state evaluation
- repetition avoidance

The number of states evaluated and the status (whether safe or unsafe) of the request were recorded for each part move request. The data was collected primarily to gain an understanding of the distribution of the number of states evaluated. A total of 61,218 requests were evaluated over the course of the simulation run. The algorithm combination was able to resolve 99.7% of the part move requests in fewer than 100 state evaluations.

Figure 5.7 is a histogram with fixed class intervals, depicting the distribution for the requests requiring less than 100 state evaluations.

Figure 5.7 Histogram of Number of States Evaluated

The average number of states evaluated per part move request was […]. The minimum number of states evaluated was 2 and the maximum was 722. The median for the data set was 6 and the mode was 2.
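The summary statistics above can be recovered from a per-request log of state-evaluation counts with a few lines. The sketch below is illustrative: `counts` stands for the 61,218 recorded values, which are not listed here, and the example call uses made-up numbers.

```python
# Illustrative summary of a per-request log of state-evaluation counts; `counts`
# would hold one entry per part move request (61,218 entries in the experiment).
from statistics import mean, median, mode

def summarize(counts):
    return {
        "requests": len(counts),
        "mean": mean(counts),
        "median": median(counts),
        "mode": mode(counts),
        "min": min(counts),
        "max": max(counts),
        "share_under_100": sum(c < 100 for c in counts) / len(counts),
    }

print(summarize([2, 2, 3, 6, 6, 9, 14, 722]))   # made-up sample, not the experimental data
```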

Only 6.6% of all part move requests were unsafe. Figure 5.8 depicts the proportion of unsafe requests using the same fixed class intervals as were used in Figure 5.7.

Figure 5.8 Proportion of Unsafe Requests

It may be observed that most part move requests are quickly resolved and that unsafe requests form only a small percentage of all requests in the experimental environment used. Further, the proportion of unsafe requests is larger for requests requiring 25 or more state evaluations.
