Optimization Problems Under One-sided (max, min)-linear Equality Constraints


WDS'12 Proceedings of Contributed Papers, Part I, 13–19, 2012. ISBN 978-80-7378-224-5 MATFYZPRESS

Optimization Problems Under One-sided (max, min)-linear Equality Constraints

M. Gad
Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic.
Sohag University, Faculty of Science, Sohag, Egypt.

Abstract. In this article we consider optimization problems whose objective function is equal to the maximum of a finite number of continuous functions of one variable. The set of feasible solutions is described by a system of (max, min)-linear equality constraints with variables on one side. The complexity of the proposed method for monotone or unimodal functions is studied, and possible generalizations and extensions of the results are discussed.

1. Introduction

Algebraic structures in which (max, +) or (max, min) replace the addition and multiplication of classical linear algebra have appeared in the literature approximately since the sixties of the last century (see e.g. Butkovič and Hegedüs [1984], Cuninghame-Green [1979], Cuninghame-Green and Zimmermann [2001], Vorobov [1967]). A systematic theory of such algebraic structures was published probably for the first time in Cuninghame-Green [1979]. In the recently published book Butkovič [2010] the reader can find the latest results concerning theory and algorithms for (max, +)-linear systems of equations. Gavalec and Zimmermann [2010] proposed a polynomial method for finding the maximum solution of a (max, min)-linear system. Zimmermann and Gad [2011] introduced a finite algorithm for finding an optimal solution of optimization problems under (max, +)-linear constraints. Gavalec et al. [2012] provide a survey of some recent results concerning (max, min)-linear systems of equations and inequalities, and optimization problems under constraints described by such systems.
Gad [2012] considered the existence of solutions of optimization problems under two-sided (max, min)-linear inequality constraints. Let us consider optimization problems under one-sided (max, min)-linear equality constraints of the form $\max_{j \in J}(a_{ij} \wedge x_j) = b_i$, $i \in I$, which can be solved by an appropriate formulation of equivalent one-sided inequality constraints using the methods presented in Gavalec et al. [2012]. Namely, we can consider inequality systems of the form $\max_{j \in J}(a_{ij} \wedge x_j) \le b_i$, $i \in I$, and $\max_{j \in J}(a_{ij} \wedge x_j) \ge b_i$, $i \in I$. Such systems have $2m$ inequalities (if $|I| = m$), which all have to be taken into account. This raises the question whether it is not more effective to solve problems with equality constraints directly, without replacing the equality constraints by the doubled number of inequalities. In this article we propose such an approach to the equality constraints. First we study the structure of the set of all solutions of the given system of equations with finite entries $a_{ij}$ and $b_i$ for all $i \in I$ and $j \in J$. Using some of the theorems characterizing the structure of the solution set of such systems, we propose an algorithm which finds an optimal solution of minimization problems with objective functions of the form $f(x) \equiv \max_{j \in J} f_j(x_j)$, where $f_j$, $j \in J$, are continuous functions. The complexity of the proposed method for monotone or unimodal functions $f_j$, $j \in J$, is studied, and possible generalizations and extensions of the results are discussed.

2. One-sided (max, min)-linear Systems of Equations

Let us introduce the following notation: $J = \{1, \dots, n\}$, $I = \{1, \dots, m\}$, where $n$ and $m$ are integers; $R = (-\infty, \infty)$, $\overline{R} = [-\infty, \infty]$, $R^n = R \times \dots \times R$ ($n$ times), and similarly $\overline{R}^n = \overline{R} \times \dots \times \overline{R}$; $x = (x_1, \dots, x_n) \in R^n$; $\alpha \wedge \beta = \min\{\alpha, \beta\}$ and $\alpha \vee \beta = \max\{\alpha, \beta\}$ for any $\alpha, \beta \in \overline{R}$; we set by definition $\max_{j \in \emptyset}(\cdot) = -\infty$ and $\min_{j \in \emptyset}(\cdot) = +\infty$. Let $a_{ij} \in R$, $b_i \in R$ for $i \in I$, $j \in J$ be given finite numbers, and consider the following system of equations:

$\max_{j \in J}(a_{ij} \wedge x_j) = b_i$, $i \in I$, (1)
$\underline{x} \le x \le \overline{x}$. (2)

The set of all solutions of system (1) will be denoted $M^=$. Before investigating properties of the set $M^=$, we give an example which shows one possible application leading to such a system.

Example 2.1 Let us assume that $m$ places $i \in I \equiv \{1, 2, \dots, m\}$ are connected with $n$ places $j \in J \equiv \{1, 2, \dots, n\}$ by roads with given capacities. The capacity of the road connecting place $i$ with place $j$ is equal to $a_{ij} \in R$. We have to extend, for all $i \in I$ and $j \in J$, the road between $i$ and $j$ by a road connecting $j$ with a terminal place $T$, and choose an appropriate capacity $x_j$ for this road. If a capacity $x_j$ is chosen, then the capacity of the road from $i$ to $T$ via $j$ is equal to $a_{ij} \wedge x_j = \min(a_{ij}, x_j)$. We require that the maximum capacity of the roads connecting $i$ to $T$ via $j$ over $j = 1, 2, \dots, n$, which is described by $\max_{j \in J}(a_{ij} \wedge x_j)$, is equal to a given number $b_i \in R$, and that the chosen capacity $x_j$ lies in a given finite interval, i.e. $x_j \in [\underline{x}_j, \overline{x}_j]$, where $\underline{x}_j, \overline{x}_j \in R$ are given finite numbers. Therefore feasible vectors of capacities $x = (x_1, x_2, \dots, x_n)$ (i.e. the vectors whose components are capacities $x_j$ having the required properties) must satisfy the system (1) and (2).

In what follows, we investigate some properties of the set $M^=$ described by system (1). To simplify the formulas, we set $a_i(x) \equiv \max_{j \in J}(a_{ij} \wedge x_j)$ for $i \in I$. Let us define for any fixed $j \in J$ the sets $I_j = \{i \in I : a_{ij} \ge b_i\}$ and $S_j(x_j) = \{k \in I : a_{kj} \wedge x_j = b_k\}$, and define the set $M^=(\overline{x}) = \{x \in M^= : x \le \overline{x}\}$.

Lemma 2.1 Let us set, for all $i \in I$ and $j \in J$, $T^=_{ij} = \{x_j : a_{ij} \wedge x_j = b_i \text{ and } x_j \le \overline{x}_j\}$. Then for any fixed $i$ and $j$ the following equalities hold:
(i) $T^=_{ij} = \{b_i\}$ if $a_{ij} > b_i$ and $b_i \le \overline{x}_j$;
(ii) $T^=_{ij} = [b_i, \overline{x}_j]$ if $a_{ij} = b_i$ and $b_i \le \overline{x}_j$;
(iii) $T^=_{ij} = \emptyset$ if either $a_{ij} < b_i$ or $b_i > \overline{x}_j$.
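The basic operation of system (1) is easy to sketch in code. The following Python fragment is our own illustration, not part of the paper (the function names are ours): it evaluates the left-hand sides $a_i(x) = \max_{j \in J}(a_{ij} \wedge x_j)$ and checks whether a given vector $x$ solves (1).

```python
# Illustration (not from the paper): evaluate the left-hand side of system (1)
# for an m x n capacity matrix A and a capacity vector x.
def maxmin_product(A, x):
    """Return the vector (max_j min(a_ij, x_j))_i for each row i of A."""
    return [max(min(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]

def satisfies_system(A, b, x, tol=1e-9):
    """Check whether x solves max_j min(a_ij, x_j) = b_i for all i in I."""
    return all(abs(lhs - b_i) <= tol for lhs, b_i in zip(maxmin_product(A, x), b))
```

In the road interpretation of Example 2.1, `maxmin_product(A, x)` is the vector of best achievable capacities from each place $i$ to the terminal $T$ for the chosen capacities $x$.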
Proof: (i) If $a_{ij} > b_i$, then $a_{ij} \wedge x_j > b_i$ for any $x_j > b_i$, and $a_{ij} \wedge x_j < b_i$ for any $x_j < b_i$, so that the only solution of the equation $a_{ij} \wedge x_j = b_i$ is $x_j = b_i \le \overline{x}_j$. (ii) If $a_{ij} = b_i$ and $b_i \le \overline{x}_j$, then $a_{ij} \wedge x_j < b_i$ for arbitrary $x_j < b_i$, but $a_{ij} \wedge x_j = b_i$ for arbitrary $b_i \le x_j \le \overline{x}_j$. (iii) If $a_{ij} < b_i$, then either $a_{ij} \wedge x_j = a_{ij} < b_i$ for arbitrary $x_j \ge a_{ij}$, or $a_{ij} \wedge x_j = x_j < b_i$ for arbitrary $x_j < a_{ij}$; therefore the equation $a_{ij} \wedge x_j = b_i$ has no solution, which means $T^=_{ij} = \emptyset$. Also, if $b_i > \overline{x}_j$, then either $x_j \ge b_i > \overline{x}_j$, so that $T^=_{ij} = \emptyset$, or $x_j < b_i$, and there are two cases: in the first, $x_j \le a_{ij}$ and $a_{ij} \wedge x_j = x_j < b_i$; in the second, $x_j > a_{ij}$ and $a_{ij} \wedge x_j = a_{ij} < x_j < b_i$. Therefore the equation $a_{ij} \wedge x_j = b_i$ has no solution, which means $T^=_{ij} = \emptyset$.
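The case analysis of Lemma 2.1 translates directly into code. The sketch below is our own helper (the name `T_eq` is ours): it returns $T^=_{ij}$ as a closed interval `(lo, hi)`, or `None` when the set is empty.

```python
# A sketch of the case analysis of Lemma 2.1 (helper name is ours): the
# solution set T_ij of a_ij /\ x_j = b_i with x_j <= xbar_j, returned as an
# interval [lo, hi], or None when the set is empty.
def T_eq(a_ij, b_i, xbar_j):
    if a_ij < b_i or b_i > xbar_j:
        return None                # case (iii): no solution
    if a_ij > b_i:
        return (b_i, b_i)          # case (i): the single point {b_i}
    return (b_i, xbar_j)           # case (ii): the whole interval [b_i, xbar_j]
```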

Lemma 2.2 Let us set, for all $i \in I$ and $j \in J$,

$x_j(i) = b_i$ if $a_{ij} > b_i$ and $b_i \le \overline{x}_j$; $x_j(i) = \overline{x}_j$ if ($a_{ij} = b_i$ and $b_i \le \overline{x}_j$) or $T^=_{ij} = \emptyset$;

$\hat{x}_j = \min_{k \in I_j} x_j(k)$ if $I_j \neq \emptyset$, and $\hat{x}_j = \overline{x}_j$ if $I_j = \emptyset$;

$S_j(\hat{x}_j) = \{k \in I_j : x_j(k) = \hat{x}_j\}$, $j \in J$.

Then the following statements hold:
(i) $\hat{x} \in M^=(\overline{x})$ if and only if $\bigcup_{j \in J} S_j(\hat{x}_j) = I$;
(ii) Let $M^=(\overline{x}) \neq \emptyset$; then $\hat{x} \in M^=(\overline{x})$ and for any $x \in M^=(\overline{x})$ we have $x \le \hat{x}$, i.e. $\hat{x}$ is the maximum element of $M^=(\overline{x})$.

Proof: (i) To prove the necessity, suppose $\hat{x} \in M^=(\overline{x})$; then $\max_{j \in J}(a_{ij} \wedge \hat{x}_j) = b_i$ for all $i \in I$. Hence for each $i \in I$ there exists at least one $j(i) \in J$ such that $a_{ij(i)} \wedge \hat{x}_{j(i)} = \max_{j \in J}(a_{ij} \wedge \hat{x}_j) = b_i$; then either $\hat{x}_{j(i)} = b_i$, if $a_{ij(i)} > b_i$ and $b_i \le \overline{x}_{j(i)}$, or $\hat{x}_{j(i)} \ge b_i$, if $a_{ij(i)} = b_i$ and $b_i \le \overline{x}_{j(i)}$, so that $i \in I_{j(i)}$ and $\hat{x}_{j(i)} = x_{j(i)}(i)$. Otherwise $a_{ik} \wedge \hat{x}_k < b_i$ for all $k \neq j(i)$, $k \in J$; then either $\hat{x}_k < b_i$ or $a_{ik} < b_i$ (so that $i \notin I_k$), and we can choose $\hat{x}_k = \overline{x}_k$. Hence for all $i \in I$ there exists at least one $j(i) \in J$ such that $S_{j(i)}(\hat{x}_{j(i)}) \neq \emptyset$ and $i \in S_{j(i)}(\hat{x}_{j(i)})$, so that $\bigcup_{j \in J} S_j(\hat{x}_j) = I$. To prove the sufficiency, let $\bigcup_{j \in J} S_j(\hat{x}_j) = I$; then for each $i \in I$ there exists at least one $j(i) \in J$ such that $S_{j(i)}(\hat{x}_{j(i)}) \neq \emptyset$ and $i \in S_{j(i)}(\hat{x}_{j(i)})$. Therefore $\hat{x}_{j(i)} = b_i$ if $a_{ij(i)} > b_i$ and $b_i \le \overline{x}_{j(i)}$, and then $a_{ij(i)} \wedge \hat{x}_{j(i)} = \hat{x}_{j(i)} = b_i$; otherwise $\hat{x}_{j(i)} = \overline{x}_{j(i)}$ if $a_{ij(i)} = b_i$ and $b_i \le \overline{x}_{j(i)}$, and then $a_{ij(i)} \wedge \hat{x}_{j(i)} = a_{ij(i)} = b_i$. Hence for all $i \in I$ there exists at least one $j(i) \in J$ such that $\max_{j \in J}(a_{ij} \wedge \hat{x}_j) = a_{ij(i)} \wedge \hat{x}_{j(i)} = b_i$, i.e. $\hat{x} \in M^=(\overline{x})$.
(ii) Let $M^=(\overline{x}) \neq \emptyset$; then for each $i \in I$ there exists at least one $j(i) \in J$ such that $T^=_{ij(i)} \neq \emptyset$, i.e. $b_i \le \overline{x}_{j(i)}$ and $a_{ij(i)} \ge b_i$. Therefore there exists at least one $j(i) \in J$ such that either $a_{ij(i)} > b_i$ and $b_i \le \overline{x}_{j(i)}$, and then $x_{j(i)}(i) = b_i$, or $a_{ij(i)} = b_i$ and $b_i \le \overline{x}_{j(i)}$, and then $x_{j(i)}(i) = \overline{x}_{j(i)}$; so that $\hat{x}_{j(i)} = b_i$ if $i \in I_{j(i)}$, and $a_{ij(i)} \wedge \hat{x}_{j(i)} = b_i$ is satisfied. Otherwise, if $I_j = \emptyset$, we set $\hat{x}_j = \overline{x}_j$. Then $\hat{x} \in M^=(\overline{x})$, and for any $x \in M^=(\overline{x})$ we have $x \le \hat{x}$, i.e. $\hat{x}$ is the maximum element of $M^=(\overline{x})$.
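Lemma 2.2 is constructive, and a direct transcription might look as follows. This is our own sketch, not the paper's code; for the covering test of statement (i) it uses the original sets $S_j(x_j) = \{k \in I : a_{kj} \wedge x_j = b_k\}$, which for $\hat{x}$ give the same covering condition $\bigcup_j S_j(\hat{x}_j) = I$.

```python
# A sketch (our own code) of Lemma 2.2: compute the candidate maximum solution
# xhat of system (1) subject to x <= xbar, together with the index sets S_j.
def max_solution(A, b, xbar):
    m, n = len(A), len(A[0])
    xhat, S = [], []
    for j in range(n):
        I_j = [i for i in range(m) if A[i][j] >= b[i]]
        # x_j(i) = b_i when a_ij > b_i and b_i <= xbar_j, else xbar_j
        cand = [b[i] if (A[i][j] > b[i] and b[i] <= xbar[j]) else xbar[j]
                for i in I_j]
        xhat.append(min(cand) if cand else xbar[j])
        # S_j: equations solved exactly by the j-th component of xhat
        S.append({i for i in range(m) if min(A[i][j], xhat[j]) == b[i]})
    feasible = set().union(*S) == set(range(m))   # statement (i)
    return xhat, S, feasible
```

By construction $a_{ij} \wedge \hat{x}_j \le b_i$ holds for every $i$ and $j$, so the covering condition alone decides whether $\hat{x}$ solves (1).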
It is now appropriate to define $M^=(\underline{x}, \overline{x}) = \{x \in M^=(\overline{x}) : x \ge \underline{x}\}$, which is the set of all solutions of the system described by (1) and (2).

Theorem 2.1 Let $\hat{x}$ and $S_j(\hat{x}_j)$ be defined as in Lemma 2.2. Then:
(i) $M^=(\underline{x}, \overline{x}) \neq \emptyset$ if and only if $\hat{x} \in M^=(\overline{x})$ and $\underline{x} \le \hat{x}$;
(ii) If $M^=(\underline{x}, \overline{x}) \neq \emptyset$, then $\hat{x}$ is the maximum element of $M^=(\underline{x}, \overline{x})$;
(iii) Let $M^=(\underline{x}, \overline{x}) \neq \emptyset$ and $\tilde{J} \subseteq J$. Let us set $\tilde{x}_j = \hat{x}_j$ if $j \in \tilde{J}$, and $\tilde{x}_j = \underline{x}_j$ otherwise. Then $\tilde{x} \in M^=(\underline{x}, \overline{x})$ if and only if $\bigcup_{j \in \tilde{J}} S_j(\tilde{x}_j) = I$.

Proof:
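Part (iii) of the theorem suggests a simple membership test for vectors $\tilde{x}$ assembled from $\hat{x}$ and $\underline{x}$. A possible transcription (our own helper, assuming $\hat{x}$ has already been computed as in Lemma 2.2):

```python
# A sketch of Theorem 2.1 (iii) (helper name is ours): build xtilde from xhat
# (components j in Jt) and xlow (the rest), and test membership in
# M^=(xlow, xbar) via the covering condition over Jt.
def tilde_feasible(A, b, xhat, xlow, Jt):
    n, m = len(xhat), len(A)
    xt = [xhat[j] if j in Jt else xlow[j] for j in range(n)]
    covered = set()
    for j in Jt:  # only components kept at xhat_j can solve equations exactly
        covered |= {i for i in range(m) if min(A[i][j], xt[j]) == b[i]}
    return xt, covered == set(range(m))
```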

(i) If $\hat{x} \in M^=(\overline{x})$ and $\underline{x} \le \hat{x}$, then from the definition of $M^=(\underline{x}, \overline{x})$ we have $\hat{x} \in M^=(\underline{x}, \overline{x})$, so that $M^=(\underline{x}, \overline{x}) \neq \emptyset$. If $M^=(\underline{x}, \overline{x}) \neq \emptyset$, then $M^=(\overline{x}) \neq \emptyset$, and from Lemma 2.2 it follows that $\hat{x} \in M^=(\overline{x})$ and $\hat{x}$ is the maximum element of $M^=(\overline{x})$; therefore $\hat{x} \ge \underline{x}$.
(ii) If $M^=(\underline{x}, \overline{x}) \neq \emptyset$, then $M^=(\overline{x}) \neq \emptyset$ and $M^=(\underline{x}, \overline{x}) \subseteq M^=(\overline{x})$; since $\hat{x}$ is the maximum element of $M^=(\overline{x})$ and $\hat{x} \in M^=(\underline{x}, \overline{x})$, $\hat{x}$ is the maximum element of $M^=(\underline{x}, \overline{x})$.
(iii) Let $\tilde{x} \in M^=(\underline{x}, \overline{x})$; then $\tilde{x} \in M^=(\overline{x})$. From the definition of $\tilde{x}$ we have $\tilde{x}_j = \hat{x}_j$ for all $j \in \tilde{J}$, so that $S_j(\tilde{x}_j) = S_j(\hat{x}_j)$ for $j \in \tilde{J}$, and $\tilde{x}_j < \hat{x}_j$ for all $j \in J \setminus \tilde{J}$, so that $S_j(\tilde{x}_j) = \emptyset$ for $j \in J \setminus \tilde{J}$. Since $\tilde{x} \in M^=(\overline{x})$, by Lemma 2.2 we have $\bigcup_{j \in \tilde{J}} S_j(\tilde{x}_j) = I$. Conversely, let $\bigcup_{j \in \tilde{J}} S_j(\tilde{x}_j) = I$; since $\tilde{J} \subseteq J$, we have $\bigcup_{j \in J} S_j(\tilde{x}_j) = I$, and by Lemma 2.2 $\tilde{x} \in M^=(\overline{x})$; also $\tilde{x} \ge \underline{x}$, therefore $\tilde{x} \in M^=(\underline{x}, \overline{x})$.

3. Optimization Problems Under One-sided (max, min)-linear Equality Constraints

In this section we solve the following optimization problem:

$f(x) \equiv \max_{j \in J} f_j(x_j) \longrightarrow \min$ (3)
subject to
$x \in M^=(\underline{x}, \overline{x})$. (4)

We assume further that the $f_j : R \to R$ are continuous monotone functions (i.e. increasing or decreasing), and that $M^=(\underline{x}, \overline{x})$, the set of all feasible solutions of the system described by (1) and (2), is nonempty (note that the emptiness of the set $M^=(\underline{x}, \overline{x})$ can be verified using the considerations of the preceding section). Let $\overline{J} \equiv \{j \in J : f_j \text{ is a decreasing function}\}$, so that $\min_{x_j \in [\underline{x}_j, \hat{x}_j]} f_j(x_j) = f_j(\hat{x}_j)$ for $j \in \overline{J}$. We can now propose an algorithm for solving problem (3)–(4) under the assumption that $M^=(\underline{x}, \overline{x}) \neq \emptyset$, which means $\bigcup_{j \in J} S_j(\hat{x}_j) = I$, i.e. an algorithm for finding an optimal solution $x^{opt}$ of problem (3)–(4).

Algorithm 3.1 The following algorithm summarizes the above discussion and finds an optimal solution $x^{opt}$ of problem (3)–(4), where the $f_j(x_j)$ are continuous monotone functions.
0 Input $I$, $J$, $\underline{x}$, $\overline{x}$, $a_{ij}$ and $b_i$ for all $i \in I$ and $j \in J$;
1 Find $\hat{x}$ and set $\tilde{x} = \hat{x}$;
2 Find $\overline{J} \equiv \{j \in J : f_j \text{ is a decreasing function}\}$;
3 $F = \{p : \max_{j \in J} f_j(\tilde{x}_j) = f_p(\tilde{x}_p)\}$;
4 If $F \subseteq \overline{J}$, then $x^{opt} = \tilde{x}$, Stop;
5 Set $y_p = \underline{x}_p$ for $p \in F$, and $y_j = \tilde{x}_j$ otherwise;
6 If $\bigcup_{j \in J} S_j(y_j) = I$, set $\tilde{x} = y$ and go to 3;
7 $x^{opt} = \tilde{x}$, Stop.
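The steps above can be sketched in Python. This is our own transcription, not the paper's code: `decreasing[j]` encodes membership of $j$ in $\overline{J}$, and we add a guard that stops when step 5 no longer changes $\tilde{x}$.

```python
# A self-contained sketch of Algorithm 3.1 (our own code), assuming each f_j
# is monotone; f is a list of callables and decreasing[j] is True iff f_j is
# decreasing (the set Jbar of step 2).
def algorithm_3_1(A, b, xlow, xbar, f, decreasing, tol=1e-9):
    m, n = len(A), len(A[0])

    def covers(x):  # covering condition  U_j S_j(x_j) = I
        return all(any(abs(min(A[i][j], x[j]) - b[i]) <= tol for j in range(n))
                   for i in range(m))

    # step 1: xhat by Lemma 2.2
    xhat = [min([b[i] if A[i][j] > b[i] else xbar[j]
                 for i in range(m) if A[i][j] >= b[i] and b[i] <= xbar[j]]
                or [xbar[j]])
            for j in range(n)]
    assert covers(xhat), "M^=(xlow, xbar) is empty"
    xt = xhat[:]
    while True:
        fmax = max(f[j](xt[j]) for j in range(n))             # step 3
        F = [p for p in range(n) if abs(f[p](xt[p]) - fmax) <= tol]
        if all(decreasing[p] for p in F):                     # step 4
            return xt
        y = xt[:]
        for p in F:                                           # step 5
            y[p] = xlow[p]
        if y == xt or not covers(y):                          # steps 6-7
            return xt
        xt = y                                                # go to step 3
```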
We will illustrate the performance of this algorithm by the following numerical example.

Example 3.1 Consider the optimization problem (3), where the $f_j(x_j)$, $j \in J$, are continuous monotone functions of the form $f_j(x_j) \equiv c_j x_j + d_j$, with

C = [0.2057 4.8742 2.8848 0.9861 1.7238 1.1737 3.3199]
D = [1.4510 1.5346 3.6121 0.9143 2.0145 1.9373 4.8467]

subject to $x \in M^=(\underline{x}, \overline{x})$, where the set $M^=(\underline{x}, \overline{x})$ is given by the system (1)–(2) with $J = \{1, 2, \dots, 7\}$, $I = \{1, 2, \dots, 6\}$, $\underline{x}_j = 0$ and $\overline{x}_j = 10$ for all $j \in J$, and where the entries $a_{ij}$ and $b_i$, $i \in I$, $j \in J$, of system (1) are given by the matrix $A$ and vector $B$ as follows:

A =
[6.1221 9.0983 9.5032 6.0123 6.1112 4.1221 5.5776]
[8.2984 3.3920 2.5185 1.1925 8.9742 6.7594 8.6777]
[2.0115 6.3539 4.4317 7.7452 0.6465 9.4098 1.3576]
[6.4355 1.6404 3.1850 3.7361 7.2605 3.0201 5.3808]
[8.5668 5.8310 2.5146 8.7804 3.7709 4.4770 2.3007]
[5.2690 9.6900 5.1598 9.2889 6.1585 1.0786 7.0121]

B^T = [6.1221 7.0955 6.3539 6.4355 6.5712 7.0121]

By the method of Section 2 we get $\hat{x}$, the maximum element of $M^=(\underline{x}, \overline{x})$:

$\hat{x} = (6.5712, 6.1221, 6.1221, 6.3539, 6.4355, 6.3539, 7.0955)$.

Using Algorithm 3.1, after five iterations we find that if we set $x_4 = 0$, the third equation of system (1) is not satisfied; therefore the algorithm goes to step 7, takes $x^{opt} = \tilde{x} = (6.5712, 0, 0, 6.3539, 0, 0, 7.0955)$, and stops. We obtain the optimal value of the objective function $f(x^{opt}) = 5.3510$. We can easily verify that $x^{opt}$ is a feasible solution.

Remark 3.1 With reference to Lemma 2.2 and Theorem 2.1, it is not difficult to see that the number of arithmetic or logical operations needed in any step to get $\hat{x}$ cannot exceed $n \cdot m$; this bound is attained when we calculate $x_j(i)$ for all $i \in I$ and $j \in J$. Also, from the above example we can see that the maximum number of operations in each step of any iteration of Algorithm 3.1 is at most the number of variables $n$, and the number of iterations from step 3 to step 6 of this algorithm cannot exceed $n$. Therefore the computational complexity of Algorithm 3.1 is $O(\max(n^2, nm))$.
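The reported vector $\hat{x}$ can be reproduced from the data above. The following check is ours, not part of the paper: it recomputes $\hat{x}$ by the formula of Lemma 2.2 with $\overline{x}_j = 10$ and verifies that it solves system (1).

```python
# Recomputing xhat for Example 3.1 from the data A, B (our own check).
A = [[6.1221, 9.0983, 9.5032, 6.0123, 6.1112, 4.1221, 5.5776],
     [8.2984, 3.3920, 2.5185, 1.1925, 8.9742, 6.7594, 8.6777],
     [2.0115, 6.3539, 4.4317, 7.7452, 0.6465, 9.4098, 1.3576],
     [6.4355, 1.6404, 3.1850, 3.7361, 7.2605, 3.0201, 5.3808],
     [8.5668, 5.8310, 2.5146, 8.7804, 3.7709, 4.4770, 2.3007],
     [5.2690, 9.6900, 5.1598, 9.2889, 6.1585, 1.0786, 7.0121]]
b = [6.1221, 7.0955, 6.3539, 6.4355, 6.5712, 7.0121]

# Lemma 2.2 with xbar_j = 10: xhat_j = min of the candidates over I_j
xhat = [min([b[i] if A[i][j] > b[i] else 10.0
             for i in range(6) if A[i][j] >= b[i]] or [10.0])
        for j in range(7)]

# feasibility: max_j min(a_ij, xhat_j) = b_i for every i
assert all(abs(max(min(A[i][j], xhat[j]) for j in range(7)) - b[i]) < 1e-9
           for i in range(6))
```

The computed `xhat` agrees with the vector $(6.5712, 6.1221, 6.1221, 6.3539, 6.4355, 6.3539, 7.0955)$ stated in the example.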
In what follows, let us modify Algorithm 3.1 so that it is suitable for finding an optimal solution for arbitrary general continuous functions $f_j(x_j)$:

Algorithm 3.2 The following algorithm summarizes the above discussion and finds an optimal solution $x^{opt}$ of problem (3)–(4), where the $f_j(x_j)$ are general continuous functions.
0 Input $I$, $J$, $\underline{x}$, $\overline{x}$, $a_{ij}$ and $b_i$ for all $i \in I$ and $j \in J$;
1 Find $\hat{x}$ and set $\tilde{x} = \hat{x}$;
2 Find $x^*_j$ such that $\min_{x_j \in [\underline{x}_j, \hat{x}_j]} f_j(x_j) = f_j(x^*_j)$, $j \in J$;
3 Set $\overline{J} \equiv \{j \in J : f_j(\tilde{x}_j) = f_j(x^*_j)\}$;
4 $F = \{p : \max_{j \in J} f_j(\tilde{x}_j) = f_p(\tilde{x}_p)\}$;
5 If $F \subseteq \overline{J}$, then $x^{opt} = \tilde{x}$, Stop;

6 Set $y_p = x^*_p$ for $p \in F$, and $y_j = \tilde{x}_j$ otherwise;
7 If $\bigcup_{j \in J} S_j(y_j) = I$, set $\tilde{x} = y$ and go to 3;
8 $x^{opt} = \tilde{x}$, Stop.

We will illustrate the performance of this algorithm by the following numerical examples.

Example 3.2 Consider the optimization problem (3), where the $f_j(x_j)$, $j \in J$, are continuous functions of the form $f_j(x_j) \equiv (x_j - \xi_j)^2$, with $\xi = (3.3529, 1.4656, 5.6084, 5.6532, 6.1536, 6.5893)$, subject to $x \in M^=(\underline{x}, \overline{x})$, where the set $M^=(\underline{x}, \overline{x})$ is given by the system (1)–(2) with $J = \{1, 2, \dots, 6\}$, $I = \{1, 2, \dots, 6\}$, $\underline{x}_j = 0$ and $\overline{x}_j = 10$ for all $j \in J$, and where the entries $a_{ij}$ and $b_i$ of system (1) are given by the matrix $A$ and vector $B$ as follows:

A =
[3.6940 0.8740 0.5518 4.6963 2.1230 1.4673]
[1.9585 8.3470 5.8150 8.5545 8.9532 8.7031]
[1.3207 8.9610 1.5718 3.7155 0.1555 4.3611]
[8.4664 9.1324 6.6594 2.5637 6.0204 6.0846]
[2.4219 9.6081 1.9312 2.5218 1.3976 4.1969]
[1.1172 3.6992 7.5108 4.7686 4.4845 4.3301]

B^T = [4.0195 7.2296 4.2766 6.6594 4.1969 6.9874]

By the method of Section 2 we get $\hat{x}$, the maximum element of $M^=(\underline{x}, \overline{x})$:

$\hat{x} = (6.6594, 4.1969, 6.9874, 4.0195, 7.2296, 4.2766)$.

Using Algorithm 3.2, after three iterations we find $x^{opt} = (3.3297, 1.4689, 6.9874, 4.0195, 7.2296, 4.2766)$. Here Algorithm 3.2 stops in step 5, since the active variable in iteration 3 is $x_6$ and at the same time $f_6$ already attains its minimal value at this value of $x_6$. We thus obtain the optimal value of the objective function $f(x^{opt}) = 5.3488$. It is easy to verify that $x^{opt}$ is a feasible solution.
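Algorithm 3.2 differs from Algorithm 3.1 essentially only in step 2, the one-dimensional minimization. In the sketch below (our own transcription, not the paper's code), step 2 is done by ternary search, which assumes each $f_j$ is unimodal on $[\underline{x}_j, \hat{x}_j]$; any other one-dimensional minimizer could be substituted, corresponding to the $\tilde{O}$ term discussed in Remark 3.2.

```python
# A self-contained sketch of Algorithm 3.2 (our own code); step 2 uses
# ternary search, valid for f_j unimodal on [xlow_j, xhat_j].
def ternary_min(f, lo, hi, iters=200):
    for _ in range(iters):
        a, c = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(a) <= f(c):
            hi = c
        else:
            lo = a
    return (lo + hi) / 2

def algorithm_3_2(A, b, xlow, xbar, f, tol=1e-6):
    m, n = len(A), len(A[0])

    def covers(x):  # covering condition  U_j S_j(x_j) = I
        return all(any(abs(min(A[i][j], x[j]) - b[i]) <= tol for j in range(n))
                   for i in range(m))

    # step 1: xhat by Lemma 2.2
    xhat = [min([b[i] if A[i][j] > b[i] else xbar[j]
                 for i in range(m) if A[i][j] >= b[i]] or [xbar[j]])
            for j in range(n)]
    assert covers(xhat), "M^=(xlow, xbar) is empty"
    xstar = [ternary_min(f[j], xlow[j], xhat[j]) for j in range(n)]  # step 2
    xt = xhat[:]
    while True:
        fmax = max(f[j](xt[j]) for j in range(n))                    # step 4
        F = [p for p in range(n) if abs(f[p](xt[p]) - fmax) <= tol]
        if all(abs(f[p](xt[p]) - f[p](xstar[p])) <= tol for p in F): # step 5
            return xt
        y = xt[:]
        for p in F:                                                  # step 6
            y[p] = xstar[p]
        if y == xt or not covers(y):                                 # step 8
            return xt
        xt = y                                                       # step 7
```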
Example 3.3 Consider the optimization problem (3), where the $f_j(x_j)$, $j \in J$, are continuous functions of the form $f_j(x_j) \equiv (x_j - \xi_j)(x_j - \eta_j)$, where $\xi = (3.3529, 1.4656, 5.6084, 5.6532, 6.1536, 6.5893)$ and $\eta = (0.7399, 0.1385, 4.1585, 1.1625, 2.1088, 1.2852)$, subject to $x \in M^=(\underline{x}, \overline{x})$, where the set $M^=(\underline{x}, \overline{x})$ is given in the same way as in Example 3.2. By the method of Section 2 we get $\hat{x}$, the maximum element of $M^=(\underline{x}, \overline{x})$:

$\hat{x} = (6.6594, 4.1969, 6.9874, 4.0195, 7.2296, 4.2766)$.

Using Algorithm 3.2 we find:

Iteration 1:
1 $\tilde{x} = (6.6594, 4.1969, 6.9874, 4.0195, 7.2296, 4.2766)$;
2 $x^* = (0.7325, 1.4689, 5.5899, 1.1656, 6.1452, 1.2830)$;
3 $\overline{J} = \emptyset$;
4 $F = \{1\}$; $f(\tilde{x}) = 19.5728$;
6 $y = (0.7325, 4.1969, 6.9874, 4.0195, 7.2296, 4.2766)$;
7 $\bigcup_{j \in J} S_j(y_j) = I$;

$\tilde{x} = (0.7325, 4.1969, 6.9874, 4.0195, 7.2296, 4.2766)$.
Iteration 2:
3 $\overline{J} = \{1\}$;
4 $F = \{3\}$; $f(\tilde{x}) = 15.3692$;
6 $y = (0.7325, 4.1969, 5.5899, 4.0195, 7.2296, 4.2766)$;
7 $\bigcup_{j \in J} S_j(y_j) = \{1, 2, 3, 5\} \neq I$;
8 $x^{opt} = (0.7325, 4.1969, 6.9874, 4.0195, 7.2296, 4.2766)$, Stop.

In Iteration 2 Algorithm 3.2 stops, since the fourth and sixth equations of system (1) cannot be satisfied if we change the value of the variable $y_3$ to $y_3 = 5.5899$. If it is necessary to change this value of $y_3$ in order to minimize the objective function, then our recommendation to the decision-maker is that both road capacities $a_{45}$ and $a_{65}$ must be changed, to $a_{45} = 6.6594$ and $a_{65} = 6.9874$, in order to keep the system of equations (1) satisfied. In this case we can continue minimizing the objective function, and after Iteration 4 we obtain $x^{opt} = (0.7325, 1.4689, 5.5899, 4.0195, 7.2296, 4.2766)$, so that the optimal value is $f(x^{opt}) = 10.0481$; we can easily verify that $x^{opt}$ is a feasible solution.

Remark 3.2 Step 2 of Algorithm 3.2 depends on the method which finds the minimum of each function $f_j(x_j)$ over the interval $[\underline{x}_j, \hat{x}_j]$, but this step is executed only once, in the first iteration. Also, from the above examples we can see that the maximum number of operations in each step of any iteration of Algorithm 3.2 is at most the number of variables $n$, and the number of iterations from step 3 to step 7 of this algorithm cannot exceed $n$. Therefore the computational complexity of Algorithm 3.2 is given by $\max\{O(\max(n^2, nm)), \tilde{O}\}$, where $\tilde{O}$ is the complexity of the method which finds the minimum of each function $f_j(x_j)$ over the interval $[\underline{x}_j, \hat{x}_j]$.

Acknowledgments. The author offers sincere thanks to Prof. Karel Zimmermann for his continuing support and great advice.
References

Butkovič, P., Hegedüs, G.: An Elimination Method for Finding All Solutions of the System of Linear Equations over an Extremal Algebra, Ekonomicko-matematický obzor 20, 203–215, 1984.
Butkovič, P.: Max-linear Systems: Theory and Algorithms, Springer Monographs in Mathematics, 267 p., Springer-Verlag, London, 2010.
Cuninghame-Green, R. A.: Minimax Algebra, Lecture Notes in Economics and Mathematical Systems 166, Springer-Verlag, Berlin, 1979.
Cuninghame-Green, R. A., Zimmermann, K.: Equation with Residual Functions, Comment. Math. Univ. Carolinae 42, 729–740, 2001.
Gad, M.: Optimization Problems under Two-sided (max, min)-linear Inequality Constraints (to appear).
Gavalec, M., Zimmermann, K.: Solving Systems of Two-Sided (max, min)-linear Equations, Kybernetika 46, 405–414, 2010.
Gavalec, M., Gad, M., Zimmermann, K.: Optimization Problems under (max, min)-linear Equation and/or Inequality Constraints (to appear).
Vorobov, N. N.: Extremal Algebra of Positive Matrices, Datenverarbeitung und Kybernetik 3, 39–71, 1967 (in Russian).
Zimmermann, K., Gad, M.: Optimization Problems under (max, +)-linear Constraints, International Conference Presentation of Mathematics 11 (ICPC), 159–165, 2011.