ON THE PERFORMANCE OF HOPFIELD NETWORK FOR GRAPH SEARCH PROBLEM


Gursel Serpen and Azadeh Parvin
Electrical Engineering & Computer Science Department, The University of Toledo, Toledo, OH, USA

ABSTRACT

This paper presents a study of the performance of the Hopfield neural network algorithm on the graph path search problem. Specifically, the performance of the Hopfield network is studied from the dynamic systems stability perspective. Simulations of the time behavior of the neural network are augmented with an exhaustive stability analysis of the equilibrium points of the network dynamics. The goal is to understand the reasons for a well-known deficiency of the Hopfield network algorithm: its inability to scale up with the problem size. A recent procedure, which establishes solutions as stable equilibrium points in the state space of the network dynamics, is employed to define the constraint weight parameters of the Hopfield neural network. The simulation study of the network and the stability analysis of equilibrium points indicate that a large set of non-solution equilibrium points also becomes stable whenever the constraint weight parameters are set to make the solution equilibrium points stable. As the problem size grows, the number of stable non-solution equilibrium points increases at a much faster rate than the number of stable solution equilibrium points: the network becomes more likely to converge to a non-solution equilibrium point.

Keywords - neural networks, Hopfield networks, weight parameters, stability analysis, dynamic system, optimization, constraint satisfaction

1. Introduction

Hopfield networks have been employed as fixed-point attractors to solve a large set of constraint satisfaction and optimization problems [3, 4, 5, 6, 7]. The time behavior of the network can be succinctly expressed as a gradient descent search in the space of a Lyapunov function, which provides a measure of the degree to which the set of constraints associated with a given problem is satisfied [6]. The main promise of the Hopfield network is to converge to the stable fixed point located within the basin of attraction implied by the initial conditions of the network dynamics. The Hopfield network can successfully address certain constraint satisfaction or optimization problems for which a locally optimal solution is acceptable [1, 2, 5]. A Boltzmann machine, a stochastic version of the Hopfield neural network algorithm, is employed if the globally optimal solution is desired [1].

When the Hopfield network is employed in fixed-point attractor mode, the neural network design task incorporates establishing the solutions of a given optimization problem as local minimum points of the Lyapunov function or, equivalently, stable points in the state space. Until recently this task was performed in an ad hoc manner, since the original Hopfield algorithm did not propose a way to define the constraint weight parameters [1, 2, 3, 4, 7, 8, 9, 12, 13, 14]. In most cases, constraint weight parameters were set using empirical guidelines. As a result, the stability properties of solution equilibrium points could not be determined. Recently, Serpen [9] and Abe [2], employing two different approaches, devised procedures to set the constraint weight parameters so that solutions are established as stable equilibrium points in the state space of the problem dynamics.

Most studies in the literature consider the Traveling Salesman Problem (TSP), which is NP-complete, as the benchmark problem for performance analysis of the Hopfield network algorithm. In an earlier work [8], it was shown that the Hopfield network converged to a solution of the TSP after each relaxation when the constraint weight parameters were set in accordance with the procedure specified in Serpen [9]. However, the solution quality was, at best, locally optimal. A state space analysis indicated that all solutions were stable and no other equilibrium point was stable. In summary, the Hopfield network always located a solution for the TSP, but the quality of solutions, in terms of the overall travel distance, was average. On the other hand, initial work with the path search problem in graphs indicated that the

Hopfield network with the constraint weight parameters defined as in Serpen [9] often failed to converge to a solution. The graph path search problem (GPSP) proved to be "hard" for the Hopfield network paradigm.

The goal of this paper is to investigate the reasons for the poor performance of the Hopfield network on the GPSP. Toward that goal, an extensive set of simulations and a mathematical stability analysis will be performed. Specifically, the stability of solution points, the set of stable equilibrium points, and the convergence properties of the Hopfield network algorithm will be studied. The structure of the paper is as follows. The discrete Hopfield network is presented in Section 1. Definitions of the GPSP, the network topology, the energy function, and bounds on the constraint weight parameters are given in Section 2. The simulation study and the mathematical stability analysis are presented in Section 3. Conclusions are presented in Section 4.

1.1 List of Symbols

Presented below is a list of the symbols employed in the remainder of the paper.

s_i : output of network node i
w_ij : weight between nodes i and j
b_i : external bias term for node i
u_i : state of node i
Θ_i : threshold of node i
E : energy or Lyapunov function
net_i : network input for node i
C_ϕ : constraint ϕ
g_ϕ : weight parameter for C_ϕ
δ_ij^ϕ : function representing the interaction between nodes i and j under C_ϕ
d_ij^ϕ : function representing the cost of the interaction between nodes i and j under C_ϕ

1.2 Definitions

A list of definitions employed in the remainder of the paper is presented below.

Definition 1. The state space set contains all 2^N N-bit binary vectors for an N-node network.

Definition 2. The stable point set includes the binary vectors which are stable points of the Hopfield network dynamics for a given problem.

Definition 3. The solution set contains those N-bit binary vectors for an N-node network which are solutions of a problem.

Definition 4. The stable solution set consists of those solutions of a given problem which are stable points of the network dynamics.

Definition 5. A relaxation (iteration) of the Hopfield network is the total computational effort for the network to start from an initial state and converge to a final state.

Definition 6. The convergence rate is computed as the number of convergences to solutions divided by the total number of relaxations attempted.

Definition 7. The convergence ratio is calculated by dividing the number of elements in the stable solution set by the number of elements in the stable point set.

Definition 8. An operating point is an instance of the values for the set of constraint weight parameters.

Definition 9. For an N × N array network topology, row(i) and col(i) are equal to the row and column indices of the node s_i with i = 1, 2, ..., N², respectively.

Definition 10. A constraint is called hard if violating it necessarily prevents the network from finding a solution.

Definition 11. A soft constraint is employed to map a cost measure associated with the quality of a solution, as typically found in optimization problems.

1.3 The Discrete Hopfield Network

The discrete Hopfield network [8, 9, 10, 11] is a nonlinear dynamic system with the following formal definition. Let s_i represent a node output, where s_i ∈ {0, 1} for i = 1, 2, ..., N and N is the number of network nodes. Then, the equation given by

E = -(1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} w_ij s_i s_j - Σ_{i=1}^{N} b_i s_i + Σ_{i=1}^{N} Θ_i s_i,  i ≠ j,  (1)

is the Lyapunov function whose local minima are the final states of the network, with node dynamics defined by

s_i^{k+1} = 0 if net_i^k < Θ_i,  s_i^{k+1} = 1 if net_i^k > Θ_i,  and  s_i^{k+1} = s_i^k if net_i^k = Θ_i,  i = 1, 2, ..., N,  (2)

where k is a discrete time index,

net_i^k = Σ_{j=1, j≠i}^{N} w_ij s_j^k + b_i,  (3)

and Θ_i is the threshold of node s_i. The weight term is defined by

w_ij = Σ_{ϕ=1}^{Z} g_ϕ δ_ij^ϕ d_ij^ϕ,

where Z is the number of constraints. Given the set of constraints C = {C_1, C_2, ..., C_Z}, g_ϕ ∈ R^+ if the hypotheses nodes s_i and s_j represent for C_ϕ are mutually supporting, and g_ϕ ∈ R^- if the same hypotheses are mutually conflicting. The term δ_ij^ϕ is equal to 1 if the two hypotheses represented by nodes s_i and s_j are related under C_ϕ and is equal to 0 otherwise. The d_ij^ϕ term is equal to 1 for all i and j under a hard constraint and is a predefined cost for a soft constraint, which is typically associated with a cost term in optimization problems.
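As a concrete illustration of the node dynamics of Equations (2) and (3), the minimal Python sketch below relaxes a discrete Hopfield network with asynchronous updates until a fixed point is reached. The two-node weight matrix, biases, and thresholds are invented example values for illustration, not a network derived from the GPSP constraints.

```python
import numpy as np

def relax(w, b, theta, s, max_sweeps=100, rng=None):
    """Asynchronously update states s per Eq. (2) until a fixed point."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):              # random asynchronous order
            net = w[i] @ s - w[i, i] * s[i] + b[i]  # Eq. (3), sum over j != i
            new = s[i] if net == theta[i] else (1 if net > theta[i] else 0)
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                           # no node changed: fixed point
            return s
    return s

# Two mutually inhibiting nodes with positive bias settle with exactly one on.
w = np.array([[0.0, -2.0], [-2.0, 0.0]])
s = relax(w, b=np.array([1.0, 1.0]), theta=np.zeros(2), s=np.array([1, 1]))
print(int(s.sum()))  # → 1
```

Whichever node updates first is switched off by its negative net input, after which the other node's positive bias keeps it on; the resulting state is a stable equilibrium point of the dynamics.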

2. Graph Path Search Problem

A graph with directed edges serves as a mathematical model for a number of real-life problems [5]; example problems include path planning in a task state space, the nonlinear multi-commodity flow problem, and the routing problem in computer networks. Determining a solution to those problems often requires computation of the shortest path between two vertices of a graph: the source vertex and the target vertex. A directed graph is a set of m vertices, V_i, i = 1, 2, ..., m, and k (k ≤ m²) directed edges, e_ij, i, j = 1, 2, ..., m, where some of the edges may not exist. Each directed edge in the graph has an associated length or cost, represented by a real number. A path between a source vertex, V_s, and a target vertex, V_t, is an ordered sequence of non-zero length edges. The path length is given by the algebraic sum of the lengths of all the edges in that path. A solution of the GPSP is then to identify the path which has the minimum length. For the case where all edges have the same length, the shortest path is equivalent to the minimum number of edges which connect the source vertex to the target vertex. The scope of this paper is limited to directed graphs with non-weighted edges, which is adequate for the purposes of this study.

From the graph-theoretic viewpoint, the shortest path between two vertices of a non-weighted directed graph is defined as a sub-graph which meets all of the following criteria. The sub-graph representing a path is both asymmetric and irreflexive. Each vertex, except the source and target vertices, must have in-degree of one and out-degree of one. The source vertex has in-degree of zero and out-degree of one. The target vertex has in-degree of one and out-degree of zero. The length of the shortest path is equal to the power of the adjacency matrix which has the first nonzero entry in the row and column locations defined by the source and target vertices, respectively.
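The adjacency-matrix criterion stated above can be checked computationally: the smallest power k for which the (source, target) entry of the adjacency matrix raised to k becomes nonzero is the shortest path length. The sketch below applies this to an invented 4-vertex digraph.

```python
import numpy as np

def shortest_path_length(adj, source, target, max_len=None):
    """Return the smallest k with (adj^k)[source, target] > 0, else None."""
    n = adj.shape[0]
    power = np.eye(n, dtype=np.int64)
    for k in range(1, (max_len or n) + 1):
        power = power @ adj                   # adj^k counts walks of length k
        if power[source, target] > 0:
            return k
    return None                               # target unreachable from source

# Example digraph with edges 0->1, 1->2, 2->3, 0->2.
# The shortest path from 0 to 3 is 0->2->3, i.e. length 2.
A = np.zeros((4, 4), dtype=np.int64)
for i, j in [(0, 1), (1, 2), (2, 3), (0, 2)]:
    A[i, j] = 1
print(shortest_path_length(A, 0, 3))  # → 2
```

Note that this yields the path length without identifying the path itself, which is exactly the property exploited later when the global inhibition constraint is defined.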
Given the graph-theoretic constraints a path specification has to satisfy, the next step is to define the corresponding topological constraints of the GPSP. The topology of the network is modeled after the

adjacency matrix of a given graph, where each network node represents a graph edge. An active node at row r and column c indicates that the edge from vertex V_r to vertex V_c is included in the path specification. Since the shortest path is irreflexive, the nodes located along the main diagonal are clamped to zero, as these nodes represent the hypothesis that the shortest path has a self-loop at vertex V_i. In order to map the in-degree-of-zero constraint for the source vertex, the nodes in the column labeled by the source vertex are clamped to zero. Similarly, the nodes in the row labeled by the target vertex are clamped to zero to enforce the out-degree-of-zero constraint for the target vertex. The nodes in the locations where there is a zero entry in the associated adjacency matrix are also clamped to zero, since a zero entry in the adjacency matrix implies that the graph does not have an edge between the related vertices. An adjacency matrix has N² entries for an N-vertex digraph. Given that certain nodes are clamped to zero, the network will have K, K < N², unclamped nodes. Note that the clamped nodes need not be included in the computations associated with the network simulations.

2.1 Definition of the Energy Function

The asymmetry of a graph which represents the shortest path requires that at most one of the two entries located at symmetric positions with respect to the main diagonal of the adjacency matrix be equal to one. Any two nodes in the network interact if the row index of node s_i equals the column index of node s_j and the column index of node s_i equals the row index of node s_j. Since only one of two interacting nodes can be equal to 1, the type of interaction between the nodes is inhibitory, with g_a ∈ R^-. The energy term for this inhibitory constraint is of the form

E_a = g_a Σ_{i=1}^{K} Σ_{j=1}^{K} δ_ij^a s_i s_j,

where δ_ij^a = 1 if row(i) = col(j) and col(i) = row(j); otherwise δ_ij^a = 0, for i, j = 1, 2, ..., K and i ≠ j. The subscript/superscript a indicates the asymmetry inhibition.

Consider the network nodes in rows and columns not associated with the source or target vertices. If a vertex is included in the shortest path specification, then there is exactly one active node in both the row

and the column labeled by this vertex. On the other hand, there are no active nodes within the rows and columns labeled by vertices which do not belong to the path specification. Thus, when the network converges to a solution, there can be at most one active node per row or column. A digraph vertex belonging to the path specification and having in-degree of one implies the existence of a single 1 in the associated column of the adjacency matrix. The condition for two nodes to interact under the column constraint is that the column index of node s_i equals the column index of node s_j. Similarly, a digraph vertex with an out-degree of one requires exactly a single 1 to exist in the associated row of the adjacency matrix. Two nodes interact under the row constraint if the row index of node s_i equals the row index of node s_j. These constraints permit at most one of the nodes in a column or row to be active at any time; thus the type of interaction is inhibitory, with g_c, g_r ∈ R^-. The energy term for the column inhibition constraint is defined by

E_c = g_c Σ_{i=1}^{K} Σ_{j=1}^{K} δ_ij^c s_i s_j,

where δ_ij^c = 1 if col(i) = col(j); otherwise δ_ij^c = 0, for i, j = 1, 2, ..., K and i ≠ j, given that both col(i) and col(j) do not correspond to the target vertex column. Similarly, the energy term for the row inhibition constraint is defined by

E_r = g_r Σ_{i=1}^{K} Σ_{j=1}^{K} δ_ij^r s_i s_j,

where δ_ij^r = 1 if row(i) = row(j); otherwise δ_ij^r = 0, for i, j = 1, 2, ..., K and i ≠ j, given that both row(i) and row(j) are not the source vertex row. The subscripts/superscripts r and c indicate the row and the column inhibition constraints, respectively.

Since the source vertex must have out-degree of one, there is exactly one active node within the source vertex row for a solution array. Given that the target vertex has in-degree of one, exactly one node is

active within the target vertex column for a solution array. These two constraints can be mapped by employing the energy terms defined by

E_st = -g_r (1 - Σ_l s_kl) - g_c (1 - Σ_m s_mn),

where k represents the source vertex row and l is the index for the columns of the network in the first energy term, n represents the target vertex column and m is the index for the rows of the network in the second energy term, and g_c ∈ R^- and g_r ∈ R^- are the constraint weight parameters associated with the column and the row inhibition constraints, respectively.

The in-degree and out-degree values for the vertices of a solution graph depend on whether a particular vertex belongs to the shortest path specification. Any vertex other than the source and the target vertices has both in-degree and out-degree equal to one if that vertex belongs to the path specification. The source vertex has in-degree of zero and out-degree of one. The target vertex has in-degree of one and out-degree of zero. A vertex not in the path specification has both in-degree and out-degree equal to zero. In terms of the adjacency matrix, a vertex with both in-degree and out-degree equal to one implies a 1 in both the row and the column labeled by this vertex. In general, if there exists a 1 in a particular row r and column c, then there must exist a 1 in the corresponding column r and row c of the solution array. This constraint, which enforces both the in-degree and the out-degree of a vertex within the path specification to be equal to 1, will be decomposed into two sub-constraints, which are called the sub-constraint column-to-row excitation and the sub-constraint row-to-column excitation, for ease of mapping to the network topology. The sub-constraint column-to-row excitation states that if the in-degree of vertex V_i is one, then the out-degree of the same vertex must also be one. For the sub-constraint row-to-column excitation, if the out-degree of vertex V_i is one, then the in-degree of the same vertex must also be one. In terms of the network topology, if the column sum associated with vertex V_i is equal to one, then the row sum associated with the same vertex must also be one for the sub-constraint column-to-row excitation. The nodes in the target vertex column do not belong to the interaction topology of this sub-constraint since the out-degree of the

target vertex is set to zero. Similarly, if the row sum associated with vertex V_i is one, then the column sum associated with the same vertex must be one for the sub-constraint row-to-column excitation. The nodes in the source vertex row do not interact under the sub-constraint row-to-column excitation since the nodes in the source vertex column are clamped to zero to set the in-degree of the source vertex to zero. Thus, two nodes, s_i and s_j, interact under the sub-constraint column-to-row excitation if the column index of s_i is equal to the row index of s_j and s_i is not located within the target vertex column, because the nodes in the target vertex row are clamped to zero. Similarly, two nodes, s_i and s_j, interact under the sub-constraint row-to-column excitation if the row index of s_i is equal to the column index of s_j and s_i is not located within the source vertex row, since the nodes within the source vertex column are clamped to zero. The nodes of a given row and column pair interact under either the sub-constraint column-to-row excitation or the sub-constraint row-to-column excitation. The sub-constraint column-to-row excitation can be mapped by

E_cr = (g_d/4) Σ_{c1} [ (Σ_r s_{r c1}) (1 - Σ_{c2} s_{c1 c2}) + (1 - Σ_r s_{r c1}) (Σ_{c2} s_{c1 c2}) ],

where r is the index for the rows and c1, c2 are the indices for the columns of the network topology, with the property that neither c1 nor c2 is the target vertex column. Similarly, the sub-constraint row-to-column excitation can be mapped by

E_rc = (g_d/4) Σ_{r1} [ (Σ_c s_{r1 c}) (1 - Σ_{r2} s_{r2 r1}) + (1 - Σ_c s_{r1 c}) (Σ_{r2} s_{r2 r1}) ],

where c is the index for the columns and r1, r2 are the indices for the rows of the network topology, with the property that neither r1 nor r2 is the source vertex row.

These energy terms have a value of zero if the corresponding column/row sums are both equal to zero or both equal to one. The values of the energy terms are greater than zero if a column/row sum is zero

and the corresponding row/column sum is greater than zero. On the other hand, the energy term values become less than zero if a column/row sum is equal to one and the corresponding row/column sum is greater than one. Although the energy terms favor each of the interacting columns and rows having more than one active node, the other constraints force the network dynamics to favor row and column sums with a value of one. A single constraint weight parameter for both sub-constraints, g_d ∈ R^+, is employed, given that the two sub-constraints are obtained by decomposing the original constraint.

Any solution path must be of minimum length. The length of the shortest path can be computed without knowing the actual path itself. The global inhibition constraint can be used to force the network to have M-out-of-K nodes active. The value of M is equal to the length of the shortest path less 2, since the nodes within the source vertex row and the target vertex column do not interact under this constraint and a solution array has one active node within both the source vertex row and the target vertex column. The following energy term is minimum when exactly M nodes are active:

E_γ = g_γ (Σ_i s_i - M)²,  i = 1, 2, ..., K,

where row(i) and col(i) are not the source vertex row and the target vertex column, respectively.

The energy function for the network is the algebraic sum of all the individual energy terms and is given by

E = E_a + E_r + E_c + E_st + E_rc + E_cr + E_γ.

In accordance with the work by Serpen [9], the applicable bound on the constraint weight parameters for solutions of the GPSP to be stable is given by

g_r + g_c + g_γ ≤ g_d.  (4)

Note that this inequality establishes a relationship between only the magnitudes of the constraint weight parameters.
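To make the mapping from pairwise constraints to network weights concrete, the sketch below assembles the inhibitory part of the weight matrix, w_ij = Σ_ϕ g_ϕ δ_ij^ϕ d_ij^ϕ, for a small n × n node array. Only the asymmetry, row, and column inhibition terms are shown; the clamping of the source/target rows and columns and the excitation and global inhibition terms are omitted for brevity, and the weight magnitudes are arbitrary illustrative choices, not operating points from this paper.

```python
import numpy as np

def build_inhibitory_weights(n, g_a=-1.0, g_r=-1.0, g_c=-1.0):
    """Assemble w_ij over the off-diagonal nodes of an n x n array."""
    # Irreflexivity: diagonal nodes are clamped to zero and excluded.
    nodes = [(r, c) for r in range(n) for c in range(n) if r != c]
    K = len(nodes)
    w = np.zeros((K, K))
    for i, (ri, ci) in enumerate(nodes):
        for j, (rj, cj) in enumerate(nodes):
            if i == j:
                continue
            if ri == cj and ci == rj:     # asymmetry constraint, delta^a
                w[i, j] += g_a
            if ri == rj:                  # row inhibition, delta^r
                w[i, j] += g_r
            if ci == cj:                  # column inhibition, delta^c
                w[i, j] += g_c
    return nodes, w

nodes, w = build_inhibitory_weights(3)
i = nodes.index((0, 1))
j = nodes.index((1, 0))
print(w[i, j])  # nodes (0,1) and (1,0) interact only under asymmetry → -1.0
```

Since every δ^ϕ condition above is symmetric in i and j, the resulting weight matrix is symmetric, as required for the Lyapunov function of Equation (1) to exist.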

3. Simulation Analysis

The identification of the set ordering relationships between the two sets of interest, the solution set and the stable point set, is necessary to understand the convergence characteristics of the Hopfield network. The solution set represents all those output vectors that satisfy the constraints of a given problem, and the stable point set consists of all those output vectors that are stable equilibrium points in the state space of the Hopfield network dynamics. It is important to observe that if the solution set is equal to the stable point set, then the Hopfield network will always converge to a solution point after each relaxation. Any other relationship between these two sets will either cause some solution points to be unstable (in the case where the stable point set is a proper subset of the solution set) or some non-solution points to be stable (in the case where the solution set is a proper subset of the stable point set). The latter is the only feasible case for the bounds given by Equation 4 since, by definition, the bounds on the constraint weight parameters suggested by Abe [2] and Serpen [9] establish the stability of all solution points.

3.1 Description of the Simulation Study and Testing Methodology

Evaluation of the network performance is realized by employing two techniques: a simulation-based relaxation study and a mathematical stability analysis. The simulation-based relaxation study is used to observe the time behavior of the network dynamics. Specifically, the rate at which the network converges to fixed points is used as the measure of network performance, and the relaxation study provides a statistical estimate of the convergence rate. The stability analysis of equilibrium points in the problem state space is used to identify the set of stable points.
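The exhaustive stability analysis described above can be sketched as follows: every one of the 2^K binary state vectors is tested, and a state is recorded as a stable equilibrium point when no single node update under the dynamics of Equation (2) changes it. The two-node weight matrix is an invented example, not a GPSP network.

```python
import numpy as np
from itertools import product

def stable_points(w, b, theta):
    """Enumerate all 2^K states and keep those that are fixed points."""
    K = len(b)
    stable = []
    for bits in product((0, 1), repeat=K):          # all 2^K binary states
        s = np.array(bits)
        ok = True
        for i in range(K):
            net = w[i] @ s - w[i, i] * s[i] + b[i]  # Eq. (3), j != i
            desired = s[i] if net == theta[i] else (1 if net > theta[i] else 0)
            if desired != s[i]:                     # node i would flip
                ok = False
                break
        if ok:
            stable.append(bits)
    return stable

w = np.array([[0.0, -2.0], [-2.0, 0.0]])            # two mutually inhibiting nodes
print(stable_points(w, b=np.array([1.0, 1.0]), theta=np.zeros(2)))
# → [(0, 1), (1, 0)]
```

The cost of this enumeration grows as 2^K, which is precisely why the exhaustive analysis below is limited to small problem sizes.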
A second measure of network performance, the convergence ratio, is computed by dividing the number of stable solution points by the number of stable equilibrium points, using the findings of the mathematical stability analysis. The two network performance measures, the convergence rate and the convergence ratio, are expected to correlate to a very large degree by definition.

Stability analysis of all equilibrium points in the problem state space is not computationally feasible for large problem sizes. A K-node network with discrete node dynamics will have 2^K states in the K-dimensional state space. This large number indicates that the upper limit for the problem size is the 5 × 5

node network for problems which require two-dimensional arrays as network topologies, where no nodes are clamped to either zero or one.

The parameter set employed in the study included the graph size, the connectivity of the graph, and the path length. Instances of the directed graphs employed in the simulation study and in the stability analysis were created by varying: 1) the number of vertices in the directed graph; 2) the connectivity of the digraph, which is the ratio of the number of edges in the graph to the number of edges in the same graph when it is fully connected; and 3) the path length between the source and the target vertices of the digraph. Graph edges were randomly defined in order to establish the generality of the simulation results. A random variable ρ, uniform in the interval [0, 1], was used, and an edge e_i was included in the graph specification if the inequality ρ_i > Connectivity Level held. To test the performance of the network, the number of vertices, the connectivity level, and the desired path length were provided to the algorithm. The length of a path between any two pairs of vertices of the digraph was computed using the adjacency matrix, although the path itself was not known. A path length of less than three was not used in any of the test cases.

The operating points for the network were generated using Equation 4. The relative magnitudes of the constraint weight parameters, rather than their absolute values, are manipulated to generate a complete set of operating points, which are presented in Table 1. Each operating point assigns values to the constraint weight parameters g_r = g_c = g_a, g_d, and g_γ.

Table 1. Operating point definitions for the GPSP.
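The random graph generation described above can be sketched as below. Note that, taken literally, the stated rule ρ_i > Connectivity Level would produce an edge ratio of one minus the connectivity level; the sketch therefore draws each candidate edge with probability equal to the connectivity level, which is the presumed intent.

```python
import numpy as np

def random_digraph(num_vertices, connectivity, seed=0):
    """Generate a random digraph whose expected edge ratio is `connectivity`."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((num_vertices, num_vertices), dtype=np.int64)
    for i in range(num_vertices):
        for j in range(num_vertices):
            # Self-loops are excluded; an edge V_i -> V_j is kept when the
            # uniform draw rho falls below the desired connectivity level.
            if i != j and rng.random() < connectivity:
                adj[i, j] = 1
    return adj

adj = random_digraph(10, 0.3)
density = adj.sum() / (10 * 9)   # realized ratio of edges to possible edges
print(adj.shape)                 # → (10, 10)
```

Because the edges are drawn independently, the realized connectivity of any single instance fluctuates around the requested level, which is one source of the instance-to-instance performance variation reported below.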

3.2 Simulation Results

An initial evaluation of the network performance indicated that the convergence rate was less than 100% and varied significantly as the graph size, the operating point, the path length, the connectivity level, and the graph instances differed. Therefore, structured tests were run to better understand the relationship between the convergence rate and the variables in the parameter set. Additionally, the dependencies between the variables in the parameter set were taken into consideration and the simulation study was modified accordingly. An example of this is the dependence of the path length on the graph size, the connectivity level, and the graph instance, where the path length decreases as the connectivity level increases. In the tests that follow, three out of four variables (the graph size, the connectivity level, the path length, and the operating point) are fixed and the convergence rate is observed as the fourth one varies. The fifth variable, the graph instance, cannot be controlled since the graph edges are randomly created.

The first test is run for the case where the operating point, the graph size, and the path length are fixed and the connectivity level is varied. The network performance depends strongly on the connectivity level, as shown in Figure 1. Specifically, the convergence rate and the convergence ratio decrease as the connectivity increases: they drop from 50-60% to about 5% as the connectivity increases to 0.8, and the convergence ratio is highly correlated with the convergence rate as the connectivity varies. The decrease in the convergence ratio indicates that the cardinality of the stable solution set becomes much smaller than that of the stable point set as the connectivity increases. As a result, the network does not always converge to a solution.

Figure 1. Performance analysis at various connectivity levels for the GPSP (6-vertex graph, path length of 3, operating point 7).

The second test case involved the evaluation of the network performance as the operating point varied. This test was performed with the following parameter values: a digraph size of six vertices, a path length of 3, and a connectivity of 0.6. The same graph instance was used for all experiments. The size of the state space set and the number of solutions for this graph instance were 4096 and 3, respectively. The results presented in Figure 2 indicate that the network performance varied significantly as the operating point changed. One operating point guided the network towards an almost 70% convergence rate, while operating point 5 caused the network performance to drop sharply. Additionally, the convergence ratio is not highly correlated with the convergence rate. It is tempting to infer that the initialization of the network and the shape and size of the basins of attraction associated with the stable equilibrium points played a significant role in this lack of correlation. Exhaustive stability evaluation of the equilibrium points demonstrated that all solutions are stable.

Figure 2. Performance analysis at various operating points for the GPSP (6-vertex graph, path length of 3, connectivity 0.6).

The third test case studied the variation of the convergence rate and the convergence ratio with respect to the graph size. In the simulation studies, a graph connectivity level of 0.3 and a path length of half the number of vertices were used. For graphs with an odd number of vertices, the path length was determined by rounding the computed value up to the next larger integer. The operating point employed in the experiments was number 8 in Table 1. The simulation results, presented in Figure 3, indicate that the network performance degraded drastically as the graph size increased: the convergence rate drops sharply from 70% for a minimal increase in the vertex count from 6 to 10. The convergence ratio dropped from 30% for the same increase in the graph size, and its variation is highly correlated with that of the convergence rate.

Figure 3. Performance analysis for various GPSP sizes (operating point 8, connectivity 0.3).

The next test case was set up to observe the change in the convergence rate with respect to changes in the path length and the connectivity level for a 10-vertex graph. The connectivity level was varied up to 0.8 and the path length in the range [4, 8]. Operating point number 8 in Table 1 was employed for this test case. The results are summarized in Table 2. The symbol " " in a box located at row r and column c means that a 10-vertex graph with a path length of c and a connectivity of r could not be generated in 100 attempts.

Table 2. Convergence rate vs. path length and connectivity level for the GPSP.

The data in Table 2 demonstrate that the convergence rate is significantly higher for low values of the graph connectivity or the path length. As an example, the convergence rate is 56% for a connectivity level of 0.3 and a path length of 4, and it drops sharply for a path length of 8 at the same connectivity level. The simulation results presented in Table 2 correlate to a large degree with the findings in Figures 1 and 3. Exhaustive stability analysis was not conducted for this test, since the 10-vertex problem with a 0.8 connectivity level generates a state space far too large to evaluate exhaustively.

Another study analyzed the performance of the network with respect to variations in the graph instances. A graph with 10 vertices, 0.3 connectivity, and a path length of 5 was employed. The constraint weight parameters were set in accordance with operating point number 8 in Table 1. Analysis of the data in Figure 4 indicates that the network performance varies considerably as the graph instance changes for fixed values of the operating point, the connectivity level, the path length, and the graph size.

Figure 4. Convergence rate vs. graph instance for the GPSP (10 vertices, connectivity 0.3, path length of 5).

The simulation studies indicate that the performance of the network depends heavily on the operating point choice, the problem size, the path length, and the connectivity of the graphs. The exhaustive mathematical stability analysis confirmed that the stable equilibrium point set included all solution points and often a significant number of non-solution points. The number of non-solution equilibrium points which are stable depended on the values of the operating point, the problem size, the graph connectivity, the path length, and the graph instance. Therefore, as a combination of those parameters was changed, the cardinality of the stable equilibrium point set, which necessarily included all solutions, also changed, as reflected by the values of the convergence ratio. The convergence ratio showed a good degree of correlation with the convergence rate and helped explain the results obtained through simulation.

3.2.1 Performance vs. Problem Size

The performance of the network for graph sizes of up to, and including, 50 vertices was studied next. The previous work indicates that the convergence rate is likely to drop significantly as the graph size increases. Thus, the variables (the graph connectivity, the path length, and the operating point) are set to maximize the convergence rate, so that the network is likely to converge to a solution without requiring unreasonable simulation time. The earlier simulation work shows that the convergence rate is highest for low graph connectivity, low path lengths, and one particular operating point in Table 1. The operating

point used in the testing of the 20- through 50-vertex graphs is very close to number 1 in the constraint weight parameter space and is defined by number 8 in Table 1. In order to save computational effort, each test was run only until the network converged to a solution; the simulation was then terminated, and the number of relaxations needed to reach that solution was recorded, rather than running the simulation for the full 100 relaxations. Additionally, not all test cases in the test matrices were implemented, since they would otherwise require a very large amount of computation. In all the tables that follow, ">100" indicates that the network is likely to require more than 100 relaxations to locate a stable solution. The symbol " " denotes that a graph with the corresponding specifications could not be generated within 100 attempts and thus no test results are available. The blank boxes represent the cases where no testing was done.

The performance analyses of the network configured to solve the 20-, 30-, 40-, and 50-vertex GPSP are presented in Tables 3, 4, 5, and 6, respectively. Simulation data indicate that the network performance degrades significantly as the problem size increases, although the network still occasionally manages to locate a solution. Earlier studies on smaller graphs showed that the size of the stable equilibrium point set increased as the problem size increased, which helps explain the significant drops in the convergence rate. The network attempted 10 relaxations for the 30-vertex GPSP with a connectivity level of 0.05 and a path length of 7 (Table 4) before locating a solution; in other words, it converged to 9 non-solution stable equilibrium points before it was able to converge to a solution. The same network required 73 relaxations to converge to a solution for the 40-vertex GPSP with a connectivity of 0.0 and a path length of 5 (Table 5).
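The testing protocol described above, repeated relaxations from random initial conditions that stop at the first solution and report the attempt count, can be sketched as follows. The two-attractor "network" used in the demonstration is a hypothetical stand-in for the real GPSP dynamics, whose relaxation outcome is one of several stable points.

```python
import numpy as np

def relaxations_until_solution(relax_once, is_solution, max_restarts=100):
    """Run independent relaxations until one ends in a solution state.
    Returns the number of relaxations used, or None after max_restarts
    (the cases reported as ">100" in the tables)."""
    for attempt in range(1, max_restarts + 1):
        if is_solution(relax_once()):
            return attempt
    return None

# Toy stand-in for the network: each relaxation lands on one of two
# stable attractors at random; only `solution` counts as a solution.
rng = np.random.default_rng(1)
solution = (1, 1, 1, 1)
spurious = (-1, -1, 1, 1)   # hypothetical non-solution stable point
relax_once = lambda: (solution, spurious)[rng.integers(2)]

n_relax = relaxations_until_solution(relax_once, lambda x: x == solution)
```

As the fraction of spurious attractors grows with problem size, the expected value of `n_relax` grows with it, which is the scaling behavior the tables document.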
The increase in the number of non-solution stable equilibrium points as the problem size grows is most likely the reason for the degradation of the network performance.
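The exhaustive stability analysis invoked throughout this study amounts to enumerating every binary state vector and keeping those that no single-unit update would change. A minimal sketch of such an enumeration is below, using a small hand-built weight matrix (an assumption for illustration) rather than an actual GPSP network; the approach is feasible only for small unit counts, which is why it could not be applied to the larger graphs above.

```python
import itertools
import numpy as np

def stable_points(W):
    """Enumerate all 2^n two-state (+1/-1) vectors and keep the fixed
    points of the asynchronous update rule: states in which every unit's
    sign already agrees with its net input (ties resolved to +1)."""
    n = W.shape[0]
    stable = []
    for bits in itertools.product([-1, 1], repeat=n):
        x = np.array(bits)
        h = W @ x                         # net input to every unit at once
        if all((1 if h[i] >= 0 else -1) == x[i] for i in range(n)):
            stable.append(bits)
    return stable

# One pattern stored by the outer-product rule: the stable set is {p, -p}.
# For a problem-encoding network, the same enumeration would be compared
# against the known solution set to count stable non-solution points.
p = np.array([1, -1, 1, -1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)
S = stable_points(W)
```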

Table 3. Iterations for convergence to a solution for the 20-vertex GPSP.

Table 4. Iterations for convergence to a solution for the 30-vertex GPSP.

Table 5. Iterations for convergence to a solution for the 40-vertex GPSP.

Table 6. Iterations for convergence to a solution for the 50-vertex GPSP.

4. Conclusions

Simulation results show that the network algorithm does not scale well with increasing problem size. The mathematical stability analysis of equilibrium points in the state space of the problem indicates that the stable point set includes all the solutions as well as some non-solution points. The same analysis also shows that the number of non-solution stable equilibrium points becomes much larger than the number of solutions in the stable point set as the size of the problem increases. Therefore, the network becomes more likely to converge to non-solution stable equilibrium points as the problem size increases.

Even though many non-solution equilibrium points become stable under the bounds on the constraint weight parameters required for solutions to be stable, the size of the stable point set is only a very small fraction of the size of the equilibrium point set. This indicates that the established bounds on the constraint weight parameters confine the search effort to a very small subspace of the problem state space. The topology of the network and the second-order Lyapunov function in its generic form, as defined by Hopfield, are not adequate to guide the network to a solution during each relaxation.

There are a number of areas which might be studied further to improve the existing Hopfield network algorithm. The problem representation and the definition of the problem-specific Lyapunov (energy) function could be studied further. The idea behind a new and better representation for the problem is to discover a form for the Lyapunov function which will make the two sets of interest equal, namely the stable point set and the solution set. The results of this work confirmed once more that a quick, average-quality solution is the promise of the Hopfield network. Global search algorithms like simulated annealing and genetic programming are shown to be better for problems where near-optimal solutions are needed [22].
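The generic second-order Lyapunov (energy) function referred to above is E(x) = -(1/2) x^T W x. A quick numerical check, under the standard assumptions of symmetric weights and zero self-connections, confirms that asynchronous threshold updates never increase this energy, which is why each relaxation is a descent to some stable point, solution or not.

```python
import numpy as np

def energy(W, x):
    """Generic second-order Hopfield energy, E(x) = -0.5 * x^T W x."""
    return -0.5 * float(x @ W @ x)

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2.0          # symmetric weights ...
np.fill_diagonal(W, 0.0)     # ... with no self-connections

x = rng.choice([-1, 1], size=n)
energies = [energy(W, x)]
for _ in range(5):           # a few asynchronous sweeps
    for i in range(n):
        x[i] = 1 if W[i] @ x >= 0 else -1
        energies.append(energy(W, x))

# Each single-unit update changes E by -2*|net input| or 0, never more.
monotone = all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
```

The descent property guarantees convergence to a fixed point but says nothing about which one; the quality of the attractor reached is entirely determined by the basin the initial condition falls into.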
There are also efforts in the literature to combine the Hopfield network paradigm, which is a local search algorithm, with the genetic programming paradigm, which is a global search algorithm, in order to benefit from the desirable features of both [20].

Acknowledgment

We wish to express our appreciation to the anonymous referees of this paper for their valuable feedback.

6. References

[1] S. Abe, J. Kawakama & K. Hirasawa, Solving Inequality Constrained Combinatorial Optimization Problems by the Hopfield Neural Networks, Neural Networks 5 (1992).
[2] S. V. B. Aiyer, M. Niranjan & F. Fallside, A Theoretical Investigation into the Performance of the Hopfield Model, IEEE Transactions on Neural Networks 1 (1990).
[3] M. K. M. Ali & F. Kamoun, Neural Networks for Shortest Path Computation and Routing in Computer Networks, IEEE Transactions on Neural Networks 4 (1993).
[4] A. Bouzerdoum & T. Pattison, Neural Network for Quadratic Optimization with Bound Constraints, IEEE Transactions on Neural Networks 4 (1993).
[5] S. Cavalieri et al., Optimal Path Determination in a Graph by Hopfield Neural Network, Neural Networks 7 (1994).
[6] C. Chiu, Y. Maa & M. A. Shanblatt, Energy Function Analysis of Dynamic Programming Neural Networks, IEEE Transactions on Neural Networks 4 (1993).
[7] B. J. Hellstrom & L. N. Kanal, Knapsack Packing Networks, IEEE Transactions on Neural Networks 3 (1992).
[8] J. J. Hopfield & D. W. Tank, Computing with Neural Circuits: A Model, Science 233 (1986).
[9] J. J. Hopfield & D. W. Tank, Neural Computation of Decisions in Optimization Problems, Biological Cybernetics 52 (1985) 141-152.
[10] J. J. Hopfield, Neurons with Graded Response Have Collective Computational Properties like Those of Two-State Neurons, Proceedings of the National Academy of Sciences (U.S.A.) 81 (1984).
[11] J. J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proceedings of the National Academy of Sciences (U.S.A.) 79 (1982).
[12] Y. Kobuchi, State Evaluation Functions and Lyapunov Functions for Neural Networks, Neural Networks 4 (1991).
[13] B. W. Lee & B. J. Sheu, Modified Hopfield Neural Networks for Retrieving the Optimal Solution, IEEE Transactions on Neural Networks 2 (1991).
[14] W. E. Lillo, M. H. Loh, S. Hui & S. H. Zak, On Solving Constrained Optimization Problems with Neural Networks: A Penalty Method Approach, IEEE Transactions on Neural Networks 4 (1993).
[15] W. Lin, J. G. Delgado-Frias, G. G. Pechanek & S. Vassiliadis, Impact of Energy Function on a Neural Network for Optimization Problems, IEEE World Congress on Computational Intelligence (1994).
[16] F.-L. Luo & Y.-D. Li, A Theorem Concerning the Energy Function of Hopfield Continuous-Variable Neural Networks, International Journal of Electronics 76 (1994).

[17] G. Serpen & D. L. Livingston, Analysis of the Relationship between Weight Parameters and Stability of Solutions in Hopfield Networks from Dynamic Systems Viewpoint, Neural, Parallel & Scientific Computation (1994).
[18] G. Serpen & D. L. Livingston, Bounds on the Weight Parameters of Hopfield Networks for the Stability of Solutions, submitted to a journal.
[19] G. Serpen, Bounds on Constraint Weight Parameters of Hopfield Networks for Stability of Optimization Problem Solutions, Ph.D. Dissertation, Old Dominion University, Norfolk, VA, 1992.
[20] H. Shirai et al., A Solution of Combinatorial Optimization Problem by Uniting Genetic Algorithms with Hopfield's Model, IEEE World Congress on Computational Intelligence (1994).
[21] Y. Shrivastava, S. Dasgupta & S. M. Reddy, Guaranteed Convergence in a Class of Hopfield Networks, IEEE Transactions on Neural Networks 3 (1992).
[22] M. A. Styblinsky & T. S. Tang, Experiments in Nonconvex Optimization: Stochastic Approximation with Function Smoothing and Simulated Annealing, Neural Networks 3 (1990).
[23] K. T. Sun & H. C. Fu, A Hybrid Neural Network Model for Solving Optimization Problems, IEEE Transactions on Computers 42 (1993).
[24] M. Vidyasagar, Location and Stability of the High-Gain Equilibria of Nonlinear Neural Networks, IEEE Transactions on Neural Networks 4 (1993).


More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

MATH3016: OPTIMIZATION

MATH3016: OPTIMIZATION MATH3016: OPTIMIZATION Lecturer: Dr Huifu Xu School of Mathematics University of Southampton Highfield SO17 1BJ Southampton Email: h.xu@soton.ac.uk 1 Introduction What is optimization? Optimization is

More information

This leads to our algorithm which is outlined in Section III, along with a tabular summary of it's performance on several benchmarks. The last section

This leads to our algorithm which is outlined in Section III, along with a tabular summary of it's performance on several benchmarks. The last section An Algorithm for Incremental Construction of Feedforward Networks of Threshold Units with Real Valued Inputs Dhananjay S. Phatak Electrical Engineering Department State University of New York, Binghamton,

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

5 Machine Learning Abstractions and Numerical Optimization

5 Machine Learning Abstractions and Numerical Optimization Machine Learning Abstractions and Numerical Optimization 25 5 Machine Learning Abstractions and Numerical Optimization ML ABSTRACTIONS [some meta comments on machine learning] [When you write a large computer

More information

A Hopfield Neural Network Model for the Outerplanar Drawing Problem

A Hopfield Neural Network Model for the Outerplanar Drawing Problem IAENG International Journal of Computer Science, 32:4, IJCS_32_4_17 A Hopfield Neural Network Model for the Outerplanar Drawing Problem Hongmei. He, Ondrej. Sýkora Abstract In the outerplanar (other alternate

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

TELCOM2125: Network Science and Analysis

TELCOM2125: Network Science and Analysis School of Information Sciences University of Pittsburgh TELCOM2125: Network Science and Analysis Konstantinos Pelechrinis Spring 2015 2 Part 4: Dividing Networks into Clusters The problem l Graph partitioning

More information

On the Computational Complexity of Nash Equilibria for (0, 1) Bimatrix Games

On the Computational Complexity of Nash Equilibria for (0, 1) Bimatrix Games On the Computational Complexity of Nash Equilibria for (0, 1) Bimatrix Games Bruno Codenotti Daniel Štefankovič Abstract The computational complexity of finding a Nash equilibrium in a nonzero sum bimatrix

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

An Integer Programming Approach to Packing Lightpaths on WDM Networks 파장분할다중화망의광경로패킹에대한정수계획해법. 1. Introduction

An Integer Programming Approach to Packing Lightpaths on WDM Networks 파장분할다중화망의광경로패킹에대한정수계획해법. 1. Introduction Journal of the Korean Institute of Industrial Engineers Vol. 32, No. 3, pp. 219-225, September 2006. An Integer Programming Approach to Packing Lightpaths on WDM Networks Kyungsik Lee 1 Taehan Lee 2 Sungsoo

More information

ACO and other (meta)heuristics for CO

ACO and other (meta)heuristics for CO ACO and other (meta)heuristics for CO 32 33 Outline Notes on combinatorial optimization and algorithmic complexity Construction and modification metaheuristics: two complementary ways of searching a solution

More information

Practice Final Exam 1

Practice Final Exam 1 Algorithm esign Techniques Practice Final xam Instructions. The exam is hours long and contains 6 questions. Write your answers clearly. You may quote any result/theorem seen in the lectures or in the

More information

Numerical Experiments with a Population Shrinking Strategy within a Electromagnetism-like Algorithm

Numerical Experiments with a Population Shrinking Strategy within a Electromagnetism-like Algorithm Numerical Experiments with a Population Shrinking Strategy within a Electromagnetism-like Algorithm Ana Maria A. C. Rocha and Edite M. G. P. Fernandes Abstract This paper extends our previous work done

More information

On the packing chromatic number of some lattices

On the packing chromatic number of some lattices On the packing chromatic number of some lattices Arthur S. Finbow Department of Mathematics and Computing Science Saint Mary s University Halifax, Canada BH C art.finbow@stmarys.ca Douglas F. Rall Department

More information

Solving Traveling Salesman Problem Using Parallel Genetic. Algorithm and Simulated Annealing

Solving Traveling Salesman Problem Using Parallel Genetic. Algorithm and Simulated Annealing Solving Traveling Salesman Problem Using Parallel Genetic Algorithm and Simulated Annealing Fan Yang May 18, 2010 Abstract The traveling salesman problem (TSP) is to find a tour of a given number of cities

More information

Principles of Optimization Techniques to Combinatorial Optimization Problems and Decomposition [1]

Principles of Optimization Techniques to Combinatorial Optimization Problems and Decomposition [1] International Journal of scientific research and management (IJSRM) Volume 3 Issue 4 Pages 2582-2588 2015 \ Website: www.ijsrm.in ISSN (e): 2321-3418 Principles of Optimization Techniques to Combinatorial

More information

New Trials on Test Data Generation: Analysis of Test Data Space and Design of Improved Algorithm

New Trials on Test Data Generation: Analysis of Test Data Space and Design of Improved Algorithm New Trials on Test Data Generation: Analysis of Test Data Space and Design of Improved Algorithm So-Yeong Jeon 1 and Yong-Hyuk Kim 2,* 1 Department of Computer Science, Korea Advanced Institute of Science

More information

LECTURES 3 and 4: Flows and Matchings

LECTURES 3 and 4: Flows and Matchings LECTURES 3 and 4: Flows and Matchings 1 Max Flow MAX FLOW (SP). Instance: Directed graph N = (V,A), two nodes s,t V, and capacities on the arcs c : A R +. A flow is a set of numbers on the arcs such that

More information

S. Ashok kumar Sr. Lecturer, Dept of CA Sasurie College of Engineering Vijayamangalam, Tirupur (Dt), Tamil Nadu, India

S. Ashok kumar Sr. Lecturer, Dept of CA Sasurie College of Engineering Vijayamangalam, Tirupur (Dt), Tamil Nadu, India Neural Network Implementation for Integer Linear Programming Problem G.M. Nasira Professor & Vice Principal Sasurie College of Engineering Viyayamangalam, Tirupur (Dt), S. Ashok kumar Sr. Lecturer, Dept

More information

CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION

CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION 131 CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION 6.1 INTRODUCTION The Orthogonal arrays are helpful in guiding the heuristic algorithms to obtain a good solution when applied to NP-hard problems. This

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

(b) Linking and dynamic graph t=

(b) Linking and dynamic graph t= 1 (a) (b) (c) 2 2 2 1 1 1 6 3 4 5 6 3 4 5 6 3 4 5 7 7 7 Supplementary Figure 1: Controlling a directed tree of seven nodes. To control the whole network we need at least 3 driver nodes, which can be either

More information

A Connection between Network Coding and. Convolutional Codes

A Connection between Network Coding and. Convolutional Codes A Connection between Network Coding and 1 Convolutional Codes Christina Fragouli, Emina Soljanin christina.fragouli@epfl.ch, emina@lucent.com Abstract The min-cut, max-flow theorem states that a source

More information

Principles of Wireless Sensor Networks. Fast-Lipschitz Optimization

Principles of Wireless Sensor Networks. Fast-Lipschitz Optimization http://www.ee.kth.se/~carlofi/teaching/pwsn-2011/wsn_course.shtml Lecture 5 Stockholm, October 14, 2011 Fast-Lipschitz Optimization Royal Institute of Technology - KTH Stockholm, Sweden e-mail: carlofi@kth.se

More information

Using graph theoretic measures to predict the performance of associative memory models

Using graph theoretic measures to predict the performance of associative memory models Using graph theoretic measures to predict the performance of associative memory models Lee Calcraft, Rod Adams, Weiliang Chen and Neil Davey School of Computer Science, University of Hertfordshire College

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

Particle Swarm Optimization applied to Pattern Recognition

Particle Swarm Optimization applied to Pattern Recognition Particle Swarm Optimization applied to Pattern Recognition by Abel Mengistu Advisor: Dr. Raheel Ahmad CS Senior Research 2011 Manchester College May, 2011-1 - Table of Contents Introduction... - 3 - Objectives...

More information

Using Genetic Algorithms to Solve the Box Stacking Problem

Using Genetic Algorithms to Solve the Box Stacking Problem Using Genetic Algorithms to Solve the Box Stacking Problem Jenniffer Estrada, Kris Lee, Ryan Edgar October 7th, 2010 Abstract The box stacking or strip stacking problem is exceedingly difficult to solve

More information

METAHEURISTICS. Introduction. Introduction. Nature of metaheuristics. Local improvement procedure. Example: objective function

METAHEURISTICS. Introduction. Introduction. Nature of metaheuristics. Local improvement procedure. Example: objective function Introduction METAHEURISTICS Some problems are so complicated that are not possible to solve for an optimal solution. In these problems, it is still important to find a good feasible solution close to the

More information