SOLVING BOUNDED VARIABLE LFP PROBLEMS BY CONVERTING IT INTO A SINGLE LP
H. K. Das*, M. Babul Hasan** and A. Islam**

Abstract: In this paper, we introduce a computer-oriented technique for solving Linear Fractional Bounded Variable (LFBV) problems by converting them into a single Linear Programming (LP) problem. We develop this computer technique using the programming language MATHEMATICA. The technique also shows all the solutions step by step. We demonstrate our technique by illustrating a number of numerical examples, and we study the comparative effectiveness of some existing algorithms. From this comparison we show that, when the denominator satisfies dx + β < 0, most of the existing approaches fail to solve LFP and LFBV problems; in the same situation our method is able to solve them. Finally, we compare our technique with other existing techniques for solving LFP and LFBV problems.

Keywords: LP, LFP, Bounded Variable, Unbounded Variable, Computer Algebra, Mathematica.

INTRODUCTION

Linear Fractional Bounded Variable (LFBV) problems have attracted considerable research interest, since they are useful in production management, financial and corporate planning, and health care and hospital planning. Nowadays, Linear Fractional Programming (LFP) criteria are frequently encountered in business and economics, for example: Min [debt-to-equity ratio], Max [return on investment], Min [risk assets to capital], Max [actual capital to required capital], etc. In real-world applications of LFP it may happen that one or more unknown variables are constrained not only by non-negativity but also by lower or upper bounds. These problems are called Linear Fractional Bounded Variable (LFBV) programming problems, so the importance of LFP and LFBV problems is evident. The field of LFP, largely developed by the Hungarian mathematician B. Martos [9, 10] and his associates in the 1960s, is concerned with problems of optimization.
* Department of Mathematics and Statistics, Concordia University, Montréal, Quebec, H3G 1M8, Canada, hkdas.rohit@gmail.com
** Department of Mathematics, University of Dhaka, Dhaka-1000, Bangladesh, e-mails: mbabulhasan@yahoo.com, asrafuldu6@gmail.com

Vol. 5, No. 2, July-December 2013

LFP and LFBV problems deal with the class of mathematical programming problems in which the relations among the variables are linear: the constraint relations must be in linear form and the objective function to be optimized must be a ratio of two linear functions. Several methods for solving LFP and LFBV problems are in use. In 1962, Charnes and Cooper [3] developed a method depending on the transformation of the LFP into equivalent LPs. Bitran and Novaes [2] solved the LFP by solving a sequence of linear programs, re-computing only the local gradient of the objective function. Some aspects concerning duality and sensitivity analysis in LFP were discussed by Bitran and Magnanti in 1976. In 1981, C. Singh made a useful study of the optimality conditions in LFP. Swarup [26] extended the usual simplex method of Dantzig [5] for solving LFP problems. Tantawy [12] developed a technique based on the dual solution. Hasan and Acharjee [14] developed a method for solving LFP with denominator dx + β > 0. The above-mentioned articles deal with variables of the x ≥ 0 type; for solving LFBV problems, these methods may fail. Besides this, we face a large number of LFBV problems in real life. This type of problem is also discussed in Das and Hasan [15] and in Bajalinov [1], but the existing method of Bajalinov [1] is laborious: Bajalinov observed that, because of the increased size of the problem obtained, the approach is computationally undesirable. It can therefore be both unrealistic and counterproductive to expect the different
method for solving the LFBV to be justified on theoretical considerations alone. For this reason, our aim is to find a procedure that takes less computational effort. The choice among different methods for solving LFP and LFBV problems was also guided by the need for a technique that is simple but robust enough to solve a number of practical problems. We therefore propose a method which appears to be less sensitive to problem size. The method suggested in this paper solves LFBV problems in which the constraint functions are linear inequalities and the variables are bounded. Earlier methods based on index partitioning (Bajalinov [1]) may have difficulties as the problem size increases, but our method appears to be less sensitive to problem size. At the beginning of our technique, we convert the LFP bounded variable problem into an LFP problem and then convert it into a single LP problem. We then develop a computer technique to implement this method using the programming language MATHEMATICA. We compare our method with other well-known methods for solving LFBV problems. In the following section, we briefly discuss the basic definitions relevant to this article.

Bounded Variable LP

In production facilities, lower and upper bounds can represent the minimum and maximum demands for certain products. Define the upper-bounded LP model as

Maximize z = {CX : (A, I)X = b, L ≤ X ≤ U, U ≥ L ≥ 0}.

The bounded algorithm uses only the constraints (A, I)X = b, X ≥ 0 explicitly, while accounting for L ≤ X ≤ U implicitly through modification of the simplex feasibility condition.
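Where a bounded-variable simplex code is not available, the bounds can instead be appended as explicit inequality rows x_j ≤ u_j and −x_j ≤ −l_j, which is exactly the initial conversion used later in Section 3. A minimal sketch in Python (the helper name is ours, not part of the paper):

```python
def append_bound_rows(A, b, lower, upper):
    """Append x_j <= u_j and -x_j <= -l_j as rows of A x <= b, so a
    bounded-variable model becomes an ordinary LP with x >= 0."""
    n = len(A[0])
    A_ext = [list(row) for row in A]
    b_ext = list(b)
    for j, (lj, uj) in enumerate(zip(lower, upper)):
        unit = [0] * n
        unit[j] = 1
        A_ext.append(unit)                # row for  x_j <= u_j
        b_ext.append(uj)
        A_ext.append([-v for v in unit])  # row for -x_j <= -l_j
        b_ext.append(-lj)
    return A_ext, b_ext
```

This trades the implicit bound handling of the upper-bounded simplex for 2n extra rows, which is why the implicit treatment above is preferred when n is large.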
Linear Fractional Bounded Variables (LFBV)

Consider the following problem, defined as LFBV:

Max: Q(x) = P(x)/D(x) = (Σ_{j=1..n} p_j x_j + p_0) / (Σ_{j=1..n} d_j x_j + d_0)

subject to: Σ_{j=1..n} a_ij x_j ≤ b_i, i = 1, 2, ..., m,

where l_j ≤ x_j ≤ u_j, u_j ≥ l_j, j = 1, 2, ..., n.

Relation between Bounded Variable LP and LFP Problems

In this section, we establish the relationship between LP and LFP problems. The mathematical form of a bounded variable LP is as follows:

Max (Min): Z = cx  (1.1)
subject to: Ax (≤, =, ≥) b  (1.2)
l_j ≤ x_j ≤ u_j, u_j ≥ l_j, j = 1, 2, ..., n  (1.3)

where A = (a_1, a_2, ..., a_m, a_{m+1}, ..., a_n) is an m × n matrix, b ∈ R^m, x is an (n × 1) column vector and c is a (1 × n) row vector. The mathematical formulation of the LFP with bounded variables is as follows:

Max (Min): Z = (cx + α)/(dx + β)
subject to: Ax (≤, =, ≥) b,
l_j ≤ x_j ≤ u_j, u_j ≥ l_j, j = 1, 2, ..., n,

where A = (a_1, a_2, ..., a_m, a_{m+1}, ..., a_n) is an m × n matrix, b ∈ R^m, x, c, d ∈ R^n and α, β ∈ R. After the initial conversion, the above system becomes the LFP:

Max (Min): Z = (cx + α)/(dx + β)  (1.5)
subject to: Ax (≤, =, ≥) b  (1.6)
x ≥ 0  (1.7)

It is assumed that the feasible region S = {x ∈ R^n : Ax ≤ b, x ≥ 0} is nonempty and bounded and that the denominator dx + β ≠ 0. Now, if d = 0 and β = 1, then the LFP in (1.5) to (1.7) becomes an LP problem; that is, (1.5) can be written as:

Max (Min): Z = cx + α
subject to: Ax ≤ b; x ≥ 0.

This is why we say that the LFP is a generalization of the LP in (1.1) to (1.3). There are also a few cases in which the LFP can be replaced by an appropriate LP. The main case is discussed as follows:
If c = (c_1, c_2, ..., c_n) and d = (d_1, d_2, ..., d_n) are linearly dependent, so that there exists µ with c = µd, then

Z = (µdx + α)/(dx + β) = µ + (α − µβ)/(dx + β).

(i) If α − µβ = 0, then Z = µ is a constant.
(ii) If α − µβ > 0 or α − µβ < 0, then Z is a monotone function of the linear form dx + β, so optimizing Z amounts to optimizing a linear objective. Therefore the LFP becomes an LP with the same feasible region S.

If c and d are not linearly dependent and d ≠ 0, then one has to find a new way to convert the LFP into an LP. Assuming that the feasible region

S = {x ∈ R^n : Ax ≤ b, x ≥ 0}  (1.8)

is nonempty and bounded and the denominator dx + β > 0, we develop a method which converts an LFP of this type into an LP.

The rest of the paper is organized as follows. In Section 2, we discuss some existing methods, such as Charnes and Cooper's method [3], Dorn's dual-type method [25], Bitran and Novaes' method [2], Swarup's method [26], and the bounded variable forms of LP and LFP. In Section 3, we present our method for solving LFBV problems. In Section 4, we compare our method with other relevant methods; we also give a time comparison chart and discuss the merits and demerits of the methods considered in this research. In Appendix A, we study the comparative effectiveness of some existing algorithms and highlight the limitations of the different methods on some test problems. In Appendix C, we show the input-output system of our computer technique.

EXISTING METHODS

In this section, we briefly discuss the most relevant existing methods (the Charnes and Cooper transformation method, Swarup's simplex-type method, Dorn's dual simplex-type method, the work of Harvey M. Wagner and John S. C. Yuan, and the bounded variable LFP method of Das and Hasan [15]) one by one, and then show the effectiveness of these methods. For this purpose, we classify the LFP (1.5), (1.6) and (1.7) into three special cases, as follows:

Type I: The program has a finite optimum at a finite point.
Type II: The program has an infinite optimum at a finite point.
Type III: The program has an infinite optimum at an infinite point.
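Before turning to the individual methods, the linear-dependence reduction described above can be verified numerically. In the sketch below (illustrative numbers of our own, exact rational arithmetic via the standard library), c = µd, and the identity Z = µ + (α − µβ)/(dx + β) holds at every point:

```python
from fractions import Fraction as F

def lfp_value(c, d, alpha, beta, x):
    """Z(x) = (c.x + alpha) / (d.x + beta)."""
    num = sum(ci * xi for ci, xi in zip(c, x)) + alpha
    den = sum(di * xi for di, xi in zip(d, x)) + beta
    return num / den

# Hypothetical data with c = mu * d (the linearly dependent case).
mu, d, alpha, beta = F(3), (F(2), F(1)), F(5), F(1)
c = tuple(mu * di for di in d)

for x in [(F(1), F(1)), (F(0), F(4)), (F(2), F(3))]:
    den = sum(di * xi for di, xi in zip(d, x)) + beta
    # Z = mu + (alpha - mu*beta)/(d.x + beta): monotone in the linear
    # form d.x + beta, so optimizing Z reduces to a linear objective over S.
    assert lfp_value(c, d, alpha, beta, x) == mu + (alpha - mu * beta) / den
```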
Charnes and Cooper's Method [3]

In 1962, Charnes and Cooper [3] considered the LFP problem defined by (1.5), (1.6) and (1.7). The authors used the variable transformation y = tx, t ≥ 0, scaled in such a way that t(d^T x + β) = δ, where δ is a specified number, and thereby transformed the LFP into an LP problem. Finally, they obtained two equivalent LP problems, named EP and EN, as follows.

Table 1
Charnes and Cooper's Transformation of the LFP

(EP) Max: L(y, t) = cy + δt        (EN) Max: −cy − δt
s.t. Ay − bt ≤ 0                    s.t. Ay − bt ≤ 0
     dy + βt = 1; y, t ≥ 0               dy + βt = −1; y, t ≥ 0

Lemma 2.1.1: Every (y, t) satisfying the constraints of (EP) has t > 0.

Theorem 2.1.2: If (i) 0 < sgn(δ) = sgn(d^T x* + β) for x* an optimal solution of (LFP), and (ii) (y*, t*) is an optimal solution of (EP), then y*/t* is an optimal solution of (LFP).

Lemma 2.1.3: It should be noted that the same reduction can be made using the numerator c^T x + α instead of the denominator d^T x + β, since maximizing the ratio (c^T x + α)/(d^T x + β) is equivalent to minimizing its reciprocal when the numerator keeps a constant sign.

Theorem 2.1.4: The corresponding statements in the table below are equivalent.

Theorem 2.1.5: If d^T x + β = 0 for all x ∈ X, then problems (EP) and (EN) are both inconsistent.

Bitran and Novaes' Method [2]

In this section, we briefly discuss the method of Bitran and Novaes [2]. The authors considered the LFP problem defined by (1.5) to (1.7), assuming that the constraint set is nonempty and bounded and that the denominator is positive for all feasible solutions. The process moves along the direction of the gradient, which means that the feasible solutions are always improved.
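The (EP) construction above is mechanical and can be sketched as follows. This is our own illustrative helper, not the authors' code; it only assembles the LP data for the case Ax ≤ b, x ≥ 0 (with δ = α) and does not solve the LP:

```python
def charnes_cooper_ep(A, b, c, d, alpha, beta):
    """Assemble (EP) in the variables (y_1, ..., y_n, t):
       maximize  c.y + alpha*t
       s.t.      A y - b t <= 0,   d.y + beta*t = 1,   y, t >= 0."""
    objective = list(c) + [alpha]
    ineq_lhs = [list(row) + [-bi] for row, bi in zip(A, b)]  # A y - b t <= 0
    ineq_rhs = [0] * len(A)
    eq_lhs = [list(d) + [beta]]                              # d.y + beta t = 1
    eq_rhs = [1]
    return objective, ineq_lhs, ineq_rhs, eq_lhs, eq_rhs

def recover_x(y, t):
    """Map an optimal (y*, t*) of (EP) back to x* = y*/t*
    (t > 0 at every feasible point, by the lemma above)."""
    return [yi / t for yi in y]
```

Any LP solver can then be applied to the assembled data; the point of the comparison in this paper is that two such LPs (EP and EN) may be needed, whereas our method needs one.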
Table 2
Relation of Equivalency between LP and LFP

Linear Fractional | Linear Programming
(i) All x ∈ X satisfy c^T x + α = d^T x + β = 0 | (i) (EP) and (EN) are inconsistent.
(ii) There exist x_n such that R(x_n) → max R(x) with d^T x_n + β = ε_n → 0, ε_n ≠ 0 | (ii) t* in (EP) or (EN) involves the artificial bound U.
(iii) R(x*) = max R(x) with d^T x* + β ≠ 0 | (iii) x* = y*/t* from (EP) or (EN).

Swarup's Method [26]

In this section, we discuss Swarup's simplex-type method. Swarup deals directly with the LFP, and one needs to compute

Δ_j = Z_2 (c_j − Z_j^(1)) − Z_1 (d_j − Z_j^(2))

and distinguish three cases. The method replaces one basic variable by one non-basic variable at a time, so long as there is some column a_j in A but not in B with Δ_j > 0, and at each step Z is increased.

Dorn's Dual-Type Method [25]

In this section, we discuss Dorn's dual-type method. The algorithm is a direct generalization of Lemke's dual simplex method for linear programs. The following results were established by Dorn in 1962.

Theorem 2.1.6: A local maximum of a linear fractional program is a global maximum.

Theorem 2.1.7: If a linear fractional program is of Type I, then the maximum occurs at an extreme point of the convex polyhedron S.

Theorem 2.1.8: If a linear fractional program is of Type I, then necessary and sufficient conditions for x to be a solution of the program are that there exist m scalars v_i (i = 1, ..., m) such that

a_i x ≤ b_i, i = 1, ..., m;
∇f(x) = Σ_{i=1..m} v_i a_i;
v_i (a_i x − b_i) = 0; v_i ≥ 0, i = 1, ..., m,

where ∇f is the gradient vector.

Hasan and Acharjee's Method [14]

In this section, we briefly discuss Hasan and Acharjee's method for solving LFP problems. They assumed that the feasible region S = {x ∈ R^n : Ax ≤ b, x ≥ 0} is nonempty and bounded and that the denominator dx + β ≠ 0. If dx + β < 0, then a solution to the LFP cannot be found by their method.

Existing Methods for Bounded Variable LP and LFP

In this section, we briefly discuss the existing methods for LFBV problems. There are two methods for solving LFBV.
One was developed by Bajalinov [1] and the other by Das and Hasan [15]. Here, we discuss the general bounded variable LP problem. Consider the following LP problem:

Maximize: Z = CX
subject to: (A, I)X = b and L ≤ X ≤ U,

where L = (l_1, ..., l_{n+m}), U = (u_1, ..., u_{n+m}) and U ≥ L ≥ 0. The elements of L and U for an unbounded variable are 0 and ∞, respectively. The problem can be solved by the regular simplex method, in which case the constraints are put in the form

(A, I)X = b,
X + X' = U,
X − X'' = L,
X, X', X'' ≥ 0,

where X' and X'' are slack and surplus variables. This problem includes 3(m + n) variables and 3m + 2n constraint equations. From the above discussion of the existing methods, one can observe their limitations and clumsiness. We also show the limitations and clumsiness of those methods by
numerical experiments in Appendix A. In the next section, we develop a method for solving LFBV problems.

OUR METHOD FOR SOLVING LFBV

In this section, we develop a method for solving both LFP and LFBV problems. We assume that the feasible region S = {x ∈ R^n : Ax ≤ b, x ≥ 0} is nonempty and bounded. Whether the denominator satisfies dx + β > 0 or dx + β < 0, the condition

(Ax − b)/(dx + β) (≤, =, ≥) 0

holds, with the direction of the inequalities chosen according to the sign of the denominator. As a result, a solution to the LFP or LFBV problem can be found by our method. On the other hand, Hasan and Acharjee [14] fail to solve the LFP with dx + β < 0, because there the condition (Ax − b)/(dx + β) ≤ 0 does not hold. Joshi, Singh and Gupta [24] also developed a method for solving LFP with denominator dx + β > 0. In the same situation our method is able to solve the LFP, as we show in Case 2 of this section.

Derivation of Our Method

The given LFBV problem is as follows:

Max (Min): Z = (cx + α)/(dx + β)
subject to: Ax (≤, =, ≥) b,
l_j ≤ x_j ≤ u_j, u_j ≥ l_j, j = 1, 2, ..., n,

where A = (a_1, a_2, ..., a_m, a_{m+1}, ..., a_n) is an m × n matrix, b ∈ R^m, x, c, d ∈ R^n and α, β ∈ R. After the initial conversion, the above system takes the following form:

Maximize: Z = (cx + α)/(dx + β)
subject to: Ax (≤, =, ≥) b,
x_j ≤ u_j, −x_j ≤ −l_j, j = 1, ..., n, and x ≥ 0.

Finally, the LFP problem becomes:

Maximize: Z = (cx + α)/(dx + β)  (3.1)
subject to: Ax (≤, =, ≥) b, x ≥ 0  (3.2)

where A is now an (m + 2n) × n matrix and b ∈ R^{m+2n}. Now we convert the above LFP into an LP in the following way, assuming that β ≠ 0.

Transformation of the Objective Function

Multiplying both the numerator and the denominator of (3.1) by 1/(dx + β), and using the identity 1/(dx + β) = (1 − dy)/β for y = x/(dx + β), we have

Z = (cx + α)/(dx + β) = cy + (α/β)(1 − dy) = (c − (α/β)d) y + α/β = py + g,

where p = c − (α/β)d, y = x/(dx + β) and g = α/β. Thus

F(y) = py + g.  (3.3)

Transformation of the Constraints

We develop two cases from the constraints (3.2), as follows.

Case 1 [14]. If dx + β > 0, we have from (3.2)

(Ax − b)/(dx + β) (≤, =, ≥) 0,

that is, Ay − b(1 − dy)/β (≤, =, ≥) 0, which gives

(A + (1/β) bd) y (≤, =, ≥) b/β, i.e., Gy (≤, =, ≥) h.  (3.4)

The method of [14] handles this case but fails for dx + β < 0.

Case 2. If dx + β < 0, then dividing Ax − b (≤, =, ≥) 0 by the negative quantity dx + β reverses the direction of the relations, so from (3.2)

(Ax − b)/(dx + β) (≥, =, ≤) 0,

that is, Ay − b(1 − dy)/β (≥, =, ≤) 0,
which gives

(A + (1/β) bd) y (≥, =, ≤) b/β, i.e., Gy (≥, =, ≤) h,  (3.4)

where G = A + (1/β) bd, y = x/(dx + β) and h = b/β.

For the two cases, we finally obtain the new LP form (3.4) of the given LFP as follows:

(LP-1) Maximize: F(y) = py + g
subject to: Gy (≤, =, ≥) h; y ≥ 0

(LP-2) Maximize: F(y) = py + g
subject to: Gy (≥, =, ≤) h; y ≥ 0,

since the original decision variable x ≥ 0.

Calculation of the Unknown Variables of the LFP

From the above LP we get y. Using the definition y = x/(dx + β), we can get

x = βy/(1 − dy).  (3.5)

Algorithm for Solving LFBV Problems

In this section, we present the algorithm used to implement our method. Our method first converts the LFBV problem into an LFP and then converts the LFP into an LP. It then finds all the basic feasible solutions of the constraint set of the resulting LP step by step, from which we obtain the solution of the LFP or LFBV problem. Our technique proceeds as follows:

Step 1: Define the types of the constraints. If all are of ≤ type, go to Step 2; otherwise go to Step 3.

Step 2:
  Substep 1: Express the problem in standard form.
  Substep 2: Start with an initial basic feasible solution in canonical form and set up the initial table.
  Substep 3: Use the inner product rule to find the relative profit factors c̄_j as follows: c̄_j = c_j − z_j = c_j − (inner product of C_B and the column corresponding to x_j in the canonical system).
  Substep 4: If c̄_j ≤ 0 for all j, the current basic feasible solution is optimal; go to Step 5. Otherwise select the non-basic variable with the most positive c̄_j to enter the basis.
  Substep 5: Choose the outgoing variable from the basis by the minimum ratio test.
  Substep 6: Perform the pivot operation to get the new table and basic feasible solution.
  Substep 7: Go to Substep 3.

Step 3: Express the problem in standard form by introducing artificial variables and form the initial basic feasible solution. Go to Substep 3.

Step 4: If any c̄_j corresponding to a non-basic variable is greater than zero, take this column as the pivot column and go to Substep 5.

Step 5: Determine all basic feasible solutions.
Step 6: Calculate the values of the objective function at the basic feasible solutions found in Step 5.

Step 7: For a maximization LP, the maximum value of F(y) is the optimal value of the objective function, together with the basic feasible solution y which yields it.

Step 8: Find the value of x from the value of y using formula (3.5).

Step 9: Finally, putting the value of x into the original LFBV problem, we obtain the optimal value of the LFBV problem with its optimal solution.

Flowchart for Solving LFBV Problems

In this subsection, we present a flowchart of our computer technique, which is helpful for understanding our computational procedure.

Numerical Examples

Example 1

Consider the following LFP problem with bounded variables (Das and Hasan [15]):

Maximize Z = (5x1 + x2 + 0) / (4x1 + x2)
Figure 1: Flowchart of Our Computational Technique

Subject to: 5x1 + x2 + x3 = 0, 4x1 − x3 + x4 = 4, with the bounds x1 ≤ 5, 4 ≤ x2, 0 ≤ x3 ≤ 50, x4 ≤ 8.

Solution using our Method

Maximize Z = (5x1 + x2 + 0) / (4x1 + x2)

Subject to: 5x1 + x2 + x3 = 0, 4x1 − x3 + x4 = 4, x1 ≤ 5, 4 ≤ x2, x3 ≤ 5, x4 ≤ 8.

Here c = (5, , 0, 0), d = (4, , 0, 0), and the constraint rows are

A1 = (5, , 1, 0), b1 = 0;  A2 = (4, 0, −1, 1), b2 = 4;  A3 = (1, 0, 0, 0), b3 = 5;  A4 = (−1, 0, 0, 0), b4 = ;  A5 = (0, 1, 0, 0), b5 = ;  A6 = (0, −1, 0, 0), b6 = 4;  A7 = (0, 0, 1, 0), b7 = 5;  A8 = (0, 0, 0, 1), b8 = 8,

where A1, b1 are related to the first constraint, A2, b2 to the second constraint, A3, b3 to the third constraint, and so on, with A8, b8 related to the
eighth constraint. So we have the new objective function

Max: F(y) = py + g, where p = c − (α/β)d for the above c and d.

Transforming each of the eight constraints in the same way, we obtain the rows of the new LP problem, which we now solve by the simplex method. From the optimal simplex table we obtain y, with y3 = 0. Now we get the value of x from (3.5), x = βy/(1 − dy). Putting this value into the original objective function, we obtain the optimal value Z. The detailed solution can be found in Appendix C.

Example 2

Consider the following linear fractional bounded program (Das and Hasan [15]):

Maximize Z = (x1 + 3x2 + 6) / (x1 + 3x2)
Subject to: x1 + x2 ≤ 0, x1 + 3x2 ≤ 60, ≤ x1 ≤ 5, 4 ≤ x2 ≤ 30.

Example 3

Maximize Z = (x1 + 3x2) / (x1 + x2 + 5)
Subject to: x1 + x2 ≤ 3, x1, x2 ≥ 0.

Solution in Our Method

The converted new LP for the above problem is given by

Maximize F(y) = y1 + 3y2
subject to: (1/5)y1 + (1/5)y2 ≤ 3/5, (4/5)y1 + (4/5)y2 ≤ 1/5, y1, y2 ≥ 0.

Here the red points indicate the vertex points and the filled region is the solution region. Graphically, the optimal vertex point is y1 = 1/4, y2 = 0, with z = 5/2.
Now we get the value of x from (3.5).

Example 4

Maximize Z = (x1 + 3x2) / (x1 + x2 + 3)
Subject to: x1 + x2 ≤ 3, x1, x2 ≥ 0.

Solution in Our Method

The converted new LP for the above problem is given by

Maximize F(y) = 3y1 + 4y2
subject to: (1/3)y1 + (1/3)y2 ≤ 1/3, y1, y2 ≥ 0.

Here the red points indicate the vertex points and the filled region is the solution region. Graphically, the optimal vertex point is y1 = 1/2, y2 = 0, with z = 3/2. Now we get the value of x from (3.5).

COMPARISONS

In this section, we give a time comparison chart to show the efficiency of our algorithm and its computer technique relative to the existing method and the direct MATHEMATICA command.

Table 3
Time Comparison

Example No. | Time used in our Program | Time in Hasan's Program | Time used in Command
Example 1 | 0.4 Sec | Failed | Sec
Example 2 | 0.33 Sec | Failed | 0.50 Sec
Example | Sec | 0.4 Sec | 0.44 Sec
Example 6 | 0. Sec | 0.3 Sec | 0.30 Sec
Example | Sec | 0. Sec | Sec
Example | Sec | 0.5 Sec | 0.43 Sec
Example | Sec | 0.53 Sec | 0.56 Sec

To find the run time of our implementation code, we use the TimeUsed[ ] command. We used the following computer configuration. Processor: Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz; Memory (RAM): .00 GB; System type: 32-bit operating system.

In the following table, we present a comparison showing the accuracy of the different methods for solving the LFP and LFBV problems.

Table 4
Accuracy of the Different Methods on the Test Problems

Example Number | Method Succeeds | Method Fails
Example 1 | Our method | Charnes-Cooper, Bajalinov, Das & Hasan, Swarup, Dorn, Hasan & Acharjee
Example 2 | Our method | Charnes-Cooper, Swarup, Bajalinov, Dorn
Example 3 | Our method | Hasan & Acharjee; Joshi, Singh & Gupta
Example 4 | Our method | Hasan & Acharjee; Joshi, Singh & Gupta
Example 5 | Dorn, Our method | Charnes-Cooper, Swarup, Bajalinov, Das & Hasan
Example 6 | Dorn, Our method | Charnes-Cooper, Swarup, Bajalinov, Das & Hasan
Example 7 | Bajalinov, Das & Hasan, Our method | Charnes-Cooper, Swarup, Dorn
Example 8 | Bajalinov, Das & Hasan, Our method | Charnes-Cooper, Swarup, Dorn
Example 9 | Charnes-Cooper, Swarup, Dorn, Bajalinov, Das & Hasan, Our method | Bitran & Novaes
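To make the comparison concrete, the whole pipeline of Section 3 can be sketched end to end in Python: build p, g, G, h, enumerate the basic feasible solutions of the small converted LP (Steps 5 to 7 of the algorithm), and recover x from (3.5). The two-variable instance at the bottom is hypothetical, not one of the paper's test problems, and the code is an illustration rather than the authors' MATHEMATICA implementation:

```python
from fractions import Fraction as F
from itertools import combinations

def transform_lfp(A, b, c, d, alpha, beta):
    """p = c - (alpha/beta)d, g = alpha/beta, G = A + (1/beta) b d, h = b/beta,
    so that F(y) = p.y + g and G y <= h (directions reverse when d.x + beta < 0)."""
    p = [ci - alpha * di / beta for ci, di in zip(c, d)]
    G = [[aij + bi * dj / beta for aij, dj in zip(row, d)] for row, bi in zip(A, b)]
    return p, alpha / beta, G, [bi / beta for bi in b]

def recover_x(y, d, beta):
    """Formula (3.5): x = beta * y / (1 - d.y)."""
    dy = sum(di * yi for di, yi in zip(d, y))
    return [beta * yi / (1 - dy) for yi in y]

def solve_2var_lp(p, g, G, h):
    """Steps 5-7: enumerate the vertices of {y >= 0 : G y <= h} and
    maximize F(y) = p.y + g over the feasible ones."""
    rows = [list(r) for r in G] + [[F(-1), F(0)], [F(0), F(-1)]]  # -y <= 0
    rhs = list(h) + [F(0), F(0)]
    best = None
    for (r1, c1), (r2, c2) in combinations(zip(rows, rhs), 2):
        det = r1[0] * r2[1] - r1[1] * r2[0]
        if det == 0:
            continue  # parallel pair of constraints, no vertex
        y = [(c1 * r2[1] - c2 * r1[1]) / det,
             (r1[0] * c2 - r2[0] * c1) / det]
        if all(r[0] * y[0] + r[1] * y[1] <= ci for r, ci in zip(rows, rhs)):
            val = p[0] * y[0] + p[1] * y[1] + g
            if best is None or val > best[0]:
                best = (val, y)
    return best  # (optimal value of F, optimal vertex y)

# Hypothetical instance: max (5x1 + 2x2 + 2)/(x1 + x2 + 3), x1 + x2 <= 4, x >= 0.
c, d, alpha, beta = [F(5), F(2)], [F(1), F(1)], F(2), F(3)
A, b = [[F(1), F(1)]], [F(4)]
p, g, G, h = transform_lfp(A, b, c, d, alpha, beta)
val, y = solve_2var_lp(p, g, G, h)
x = recover_x(y, d, beta)  # back in the original variables
```

The optimum of the converted LP and the value of the original ratio at the recovered x agree, which is the consistency check behind the method.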
Table 5
Merits and Demerits of the Different Methods on the Test Problems

Charnes and Cooper's method: One needs to solve two LPs by the two-phase or Big-M simplex method of Dantzig [5], which is lengthy, time-consuming and clumsy. This method may fail for some special cases of LFP problems, as shown in Appendix A, and may also fail for LFBV problems, as shown in Examples 1 and 2.

Bitran and Novaes' method: One needs to solve a sequence of problems, which may require many iterations and is lengthy, time-consuming and clumsy. This method may fail (Example 9, Appendix A) to recognize the solution, whereas our method is successful. It may also fail for LFBV problems (Examples 1 and 2).

Swarup's method: One needs to deal with the ratio of two linear functions when calculating the most negative cost coefficient or the most positive profit factor in each iteration, which complicates the computational process. Besides, when the constraints are not in canonical form, Swarup's method becomes lengthier still, as it has to combine the two-phase simplex method with the ratio of two linear functions. This method may fail for some special cases of LFP problems, as shown in Appendix A, and may also fail for LFBV problems (Examples 1 and 2).

Dorn's method: One needs to solve LFP problems which may sometimes require many iterations, which is time-consuming and clumsy. This method succeeds on most of the LFP problems shown in Appendix A (Examples 1 to 4), but can fail for LFBV problems (Appendix A, Examples 1 and 2).

Bajalinov's method: This method solves LFBV problems and succeeds on most of the LFBV problems shown in Appendix A (Examples 1 and 2). It can, however, fail for LFP problems.
Hasan and Acharjee's method: Hasan and Acharjee did not show the solution step by step through a programming technique. Their technique may fail for large-scale LFP problems (Examples 1 and 2) and also fails for Examples 3 and 4. They did not discuss LFBV problems in their method or their technique. They also did not treat both the inequality and the equality forms of the LFP, nor a negative denominator, whereas we discuss both the inequality and equality forms of LFP and LFBV problems.

Joshi, Singh and Gupta's method: Joshi, Singh and Gupta did not treat LFBV problems. They did not treat both the inequality and equality forms, nor the denominator dx + β < 0, for LFP problems; if dx + β < 0, this method fails (Examples 3 and 4).

Our technique: Our technique can solve both LFP and LFBV types of problems (shown in Table 4) step by step using less computational time (shown in Table 3), whereas the other methods may fail, as shown in Appendix A on a number of test problems. No other technique can solve both LFP and LFBV problems simultaneously. In our technique, we need to solve only a single LP, which helps us save valuable time. Finally, using the computer program, we can solve any type of LFP or LFBV problem and obtain the optimal solution very quickly. The final result converges more quickly in our technique than in the other techniques (shown in Table 3 and Appendix A).

CONCLUSION

Our aim was to develop a sophisticated technique for solving LFBV problems. In this paper, we have introduced a computer-oriented technique which converts LFBV problems into LFP problems and then converts the LFP into a single LP problem. We developed this computer technique using the programming language MATHEMATICA. We also studied the comparative effectiveness of some existing algorithms and highlighted the limitations of the existing methods. Comparing all the methods considered in this research paper for solving LFP and LFBV problems, we observed that
our technique is better than the other methods, since our computational technique is easy and shows the simplex table step by step, while the other techniques are clumsy. We illustrated a number of numerical examples to demonstrate our method.

Acknowledgement

We would like to thank three anonymous reviewers and the editor, whose thoughtful comments helped to improve the clarity of the manuscript.

References

[1] Bajalinov, E. B. (2003), Linear-Fractional Programming: Theory, Methods, Applications and Software, Boston: Kluwer Academic Publishers.

[2] Bitran, G. R. and Novaes, A. G. (1972), Linear Programming with a Fractional Objective Function, University of Sao Paulo, Brazil.

[3] Charnes, A. and Cooper, W. W. (1962), Programming with Linear Fractional Functionals, Naval Research Logistics Quarterly, 9, pp. 181-186.

[4] Charnes, A. and Cooper, W. W. (1973), An Explicit General Solution in Linear Fractional Programming, Naval Research Logistics Quarterly, Vol. 20, No. 3.

[5] Dantzig, G. B. (1963), Linear Programming and Extensions, Princeton University Press, Princeton, N.J.

[6] Hasan, M. B. (2008), Solution of Linear Fractional Programming Problems through Computer Algebra, The Dhaka University Journal of Science, 57(1).

[7] Hasan, M. B., A. F. M. Khodadad Khan and M. Ainul Islam, Basic Solutions of Linear System of Equations through Computer Algebra, J. Bangladesh Math. Soc.

[8] Wagner, H. M. and Yuan, J. S. C. (1968), Algorithmic Equivalence in Linear Fractional Programming, Management Science, Vol. 14, No. 5.

[9] Martos, B. (1960), Hyperbolic Programming, Publ. Math. Inst., Hungarian Academy of Sciences, Vol. 5(B).

[10] Martos, B. (1964), Hyperbolic Programming, Naval Research Logistics Quarterly, Vol. 11, pp. 135-155.

[11] Swarup, K., P. K. Gupta and Man Mohan (2003), Tracts in Operation Research, Eleventh Thoroughly Revised Edition.

[12] Tantawy, S.
F. (2007), A New Method for Solving Linear Fractional Programming Problems, Australian Journal of Basic and Applied Sciences, INSinet Publ., 1(2).

[13] Wolfram, S. (2000), Mathematica, Addison-Wesley Publishing Company, Menlo Park, California; New York.

[14] Hasan, M. B. and S. Acharjee, Solving LFP and its Bounded Variable by Converting it into a Single Linear Program, International Journal of Operations Research, No. 8.

[15] Das, H. K. and M. Babul Hasan (2012), A Proposed Technique for Solving Linear Fractional Bounded Variable Problems, The Dhaka University Journal of Science, Vol. 60, No. 2, pp. 223-230.

[16] Castagnoli, E. and G. Favero (2008), On the Completeness of a Constrained Market, International Journal of Applied Management Science, Vol. 1, pp. 191-196.

[17] Islam, M. A. and Gopi Nath (1996), Investigation on a Certain Algorithm for Linear Fractional Programming Problems, The Bangladesh Journal of Scientific Research.

[18] Jain, D., B. A. Metri and V. Aggarwal, Analytical Modelling of Multi Stage Convergent Supply Chain System Under Just-in-time, International Journal of Applied Management Science, Vol. 3.

[19] Das, H. K., T. Saha and M. Babul Hasan, Numerical Experiments by Improving a Numerical Method for Solving Game Problems through Computer Algebra, International Journal of Decision Sciences (International Science Press), Vol. 3.

[20] Gurgur, C. Z., Optimisation of Warranty Reserve Policies, International Journal of Applied Management Science, Vol. 3, No. 3.

[21] Tewari, S. K. and M. Misra, A Conceptual Framework for Defining Scope and Opportunities of Interdisciplinary Research in Management, International Journal of Applied Management Science, Vol. 4.

[22] Punniyamoorthy, M. and R. Murali, A Framework to Arrive at a Unique Performance Measurement Score for the Balanced Scorecard, International Journal of Data Analysis Techniques and Strategies, Vol. 2, No. 3.

[23] William P.
Fox (2010), Teaching the Applications of Optimization in Game Theory's Zero Sum and Non-zero Sum Games, International Journal of Data Analysis Techniques and Strategies, Vol. 2, No. 3.

[24] Joshi, V. D., E. Singh and N. Gupta (2008), Primal-Dual Approach to Solve LFP Problem, Journal of Applied Mathematics, Statistics and Informatics, Vol. 4.

[25] Dorn, W. S. (1962), Linear Fractional Programming, IBM Research Report.

[26] Swarup, K. (1964), Linear Fractional Programming, Operations Research, Vol. 13, No. 6.

[27] Das, H. K. and M. Babul Hasan, An Algorithm and its Computer Technique for Solving Game Problems using LP Method, International Journal of Basic and Applied Sciences.
APPENDIX A
Review of the Different Methods by Numerical Experiments

The following test problems are used to assess the workability of our method for solving LFP and LFBV problems. The analytical properties of these problems are used for the comparison with the different methods.

Type I: Finite Optimum at a Finite Point

If the constraint set or the feasible region X is bounded and the denominator dx + β > 0 for all x ∈ X, each of the four algorithms can successfully solve the problem. The methods of Charnes-Cooper (1962) and Dorn (1962) are equivalent, though the former is a primal-type and the latter a dual-type simplex method. One can observe that the techniques of Charnes-Cooper (1962), Swarup (1964) and Dorn (1962) check the optimality of the fractional program at each step of the simplex method. If the constraint set or the feasible region is unbounded, the method of Swarup may fail, whereas the methods of Charnes-Cooper and Dorn will always recognize an optimal point and stop, if such a point is reached. We now illustrate this difference by the following simple example.

Example 5

Solution in Our Method: x1 = 0, with Zmax = 7/2. The converted new LP for the above problem is given by

Maximize F(y) = y1 + 7y2, subject to constraints in 4y1 + y2 and 6y1, with y1, y2 ≥ 0.

The detailed result can be found in Appendix C.
Example 6

Maximize Z = (4x2 + 7) / (5x2)
Subject to: x ≥ 0.

On applying the Charnes and Cooper transformation technique to the (EP) of this example, an optimal basic feasible solution of (EP) is obtained with t, the slacks and y as basic variables and Zmax = 7/2. Similarly, using Swarup's simplex-type method, one can show that the solution is 7/2. By the method of Dorn, we reach the optimal solution with Zmax = 7/2. Similarly, by using the method of Charnes-Cooper, one obtains the same optimal solution with Zmax = 7/2. Using the method of Swarup's simplex-type method to solve the above LFP, we conclude that the solution is unbounded. From the above discussion, we conclude that for a Type I problem with an unbounded constraint set, the methods of Dorn and Charnes-Cooper give the correct solution, but the method of Swarup fails to identify the correct solution.

Solution in Our Method

The converted new LP for the above problem is given by

Maximize F(y) = y1 + 7y2, subject to constraints in 4y1 + y2 and 6y1, with y1, y2 ≥ 0.

The detailed result can be found in Appendix C.
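The Type I behaviour on an unbounded region can be seen on a one-variable instance of the same shape as Examples 5 and 6 (the exact coefficients here are our own, since the original ones did not survive transcription): Z decreases monotonically toward the ratio of the leading coefficients, so the finite optimum sits at the boundary point x = 0 even though the region x ≥ 0 is unbounded.

```python
def z(x):
    """Hypothetical Type I instance: Z(x) = (4x + 7)/(5x + 1) on x >= 0."""
    return (4 * x + 7) / (5 * x + 1)

values = [z(x) for x in range(1000)]
assert values[0] == 7.0                                # maximum attained at x = 0
assert all(a > b for a, b in zip(values, values[1:]))  # Z strictly decreases
assert abs(z(10**9) - 4 / 5) < 1e-6                    # Z -> 4/5 as x grows
```

A method that only follows increasing ratio values recognizes and stops at this optimum; one that pivots toward the unbounded direction may report failure, which is the distinction drawn above between Swarup's method and the methods of Charnes-Cooper and Dorn.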
Type II: Infinite Optimum at a Finite Point

If the constraint set (the feasible region X) is bounded and the denominator dx + β changes sign over the feasible region, only the method of Dorn gives the correct solution; the methods of Charnes-Cooper and Swarup fail to identify the correct solution. We now illustrate this difference by the following simple example.

Example 7

Maximize z = (x₁ … 3x₂)/(x₂ …)
Subject to 3x₁ + x₂ ≤ 3, x₁, x₂ ≥ 0

This gives us an unbounded solution. Clearly, one can easily observe that the (EN) is inconsistent. Thus, by using the method of Charnes-Cooper, we conclude that the solution is unbounded. Similarly, using the Swarup simplex-type method, one can show that the solution is unbounded.

Solution in Our Method

The converted new LP for the above problem is given by

Maximize F(y) = y₁ + 3y₂
Subject to y₁ + y₂ ≤ …, y₂ ≤ …, y₁, y₂ ≥ 0

By using the method of Charnes-Cooper, one can obtain x₁ = 0, x₂ = 3/2 with Z_max = 9. We can easily observe that the denominator dx + β = x₂ … changes sign over the feasible region X. On the contrary, if we apply the Swarup simplex-type method to solve the above problem, we obtain the solution x₁ = 0, x₂ = 3/2 with Z_max = 9, which is not the correct optimal solution.

Solution in Our Method

The converted new LP for the above problem is given by

Maximize F(y) = y₁ + 3y₂
Subject to 5y₁ − y₂ ≤ 3, 4y₁ − y₂ ≤ 3, y₁, y₂ ≥ 0

The detailed result can be found in Appendix C.

Type III: Infinite Optimum at an Infinite Point

If the constraint set (the feasible region X) is unbounded, the methods of Swarup and Charnes-Cooper may fail, whereas the method of Dorn will always recognize and stop at an optimal point, if such a point is reached. We now illustrate this difference by the following simple example.
Example 8

Maximize z = (x₁ … 3x₂)/(…)

Table 6
Numerical Experiments of the Three Special Cases

Type I: If the constraint set or the feasible region is bounded and the denominator dx + β > 0 for all x ∈ X, the methods of Charnes & Cooper (1962), Swarup (1964), and Dorn (1962) can successfully solve the problem (Example 1). But if the region is unbounded, the method of Swarup fails to recognize the solution, whereas the methods of Charnes & Cooper and Dorn stop at an optimal point, if such a point is reached (Example 2).

Type II: If the constraint set or the feasible region is bounded and the denominator dx + β changes sign over X, the methods of Charnes & Cooper and Swarup fail to identify the optimal solution, whereas the method of Dorn gives the correct optimal solution (Example 3).

Type III: If the constraint set or the feasible region is unbounded, the methods of Charnes & Cooper and Swarup fail to recognize the solution, whereas the method of Dorn always recognizes and stops at an optimal point, if such a point is reached (Example 4).

Types I, II & III: Our method succeeds in all of the cases, as shown in Appendices A & C.

Example 9

Suppose that the financial advisor of a university's endowment fund must invest up to Tk100,000 in two types of securities: bond 7 Stars, paying a dividend of 7%, and stock Max May,

Vol. 5, No. 2, July-December 2013
paying a dividend of 9%. The advisor has been advised that no more than Tk30,000 can be invested in stock Max May, while the amount invested in bond 7 Stars must be at least twice the amount invested in stock Max May. Independent of the amount to be invested, the service of the broker company which serves the advisor costs Tk100 (Erik B. Bajalinov [1]). How much should be invested in each security to maximize the efficiency of the investment? The resulting LFP is as follows:

Maximize Z = R(x₁, x₂)/D(x₁, x₂) = (0.07x₁ + 0.09x₂)/(x₁ + x₂ + 100)
Subject to x₁ + x₂ ≤ 100000; x₁ − 2x₂ ≥ 0; x₂ ≤ 30000; x₁, x₂ ≥ 0.

Thus the Bitran and Novaes method fails to solve the problem.

Solution using Our Method

Here we have c = (0.07, 0.09), d = (1, 1), α = 0, β = 100, A₁ = (1, 1), b₁ = 100000, A₂ = (−1, 2), b₂ = 0, A₃ = (0, 1), b₃ = 30000, where A₁, b₁ are related to the first constraint, A₂, b₂ to the second constraint and A₃, b₃ to the third constraint.

The new objective function is

Maximize F(y) = [(0.07, 0.09) − (0/100)(1, 1)]y + 0/100 = 0.07y₁ + 0.09y₂ [using Section 3]

For the first constraint we have

[(1, 1) + (100000/100)(1, 1)]y ≤ 100000/100 [according to (3.3)]
[(1, 1) + (1000, 1000)]y ≤ 1000
1001y₁ + 1001y₂ ≤ 1000

Similarly, the second and third constraints are −y₁ + 2y₂ ≤ 0 and 300y₁ + 301y₂ ≤ 300, respectively. The new LP problem is

Maximize F(y) = 0.07y₁ + 0.09y₂
Subject to 1001y₁ + 1001y₂ ≤ 1000, −y₁ + 2y₂ ≤ 0, 300y₁ + 301y₂ ≤ 300, y₁, y₂ ≥ 0

After some calculation we get (y₁, y₂), and from it x = (x₁, x₂) = (60000, 30000); putting this value in the original objective function, we have Z = 6900/90100 ≈ 0.0766. So the optimal value is Z ≈ 0.0766 and the optimal solution is x₁ = 60000, x₂ = 30000.

Finally, we conclude that if the problem is an LFP then some of the existing methods may fail, whereas the dual-type algorithm of Dorn is the best method for solving all types of LFP, though it involves more calculations.
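Because the denominator x₁ + x₂ + 100 is positive over the bounded feasible region, the optimum of this LFP is attained at an extreme point of the region, so the reported solution can be checked by enumerating the vertices. The following Python sketch is our own check (not the authors' program), built from the constraints stated above:

```python
from fractions import Fraction as F
from itertools import combinations

# Example 9 as reconstructed from the constraints in the text:
#   max Z = (0.07 x1 + 0.09 x2) / (x1 + x2 + 100)
#   s.t. x1 + x2 <= 100000, x1 >= 2 x2, x2 <= 30000, x1, x2 >= 0
def Z(x1, x2):
    return (F(7, 100) * x1 + F(9, 100) * x2) / (x1 + x2 + 100)

def feasible(x1, x2):
    return (x1 >= 0 and x2 >= 0 and x1 + x2 <= 100000
            and x1 >= 2 * x2 and x2 <= 30000)

# Each constraint boundary as a*x1 + b*x2 = r; the candidate vertices
# are the feasible pairwise intersections of these lines.
lines = [(1, 1, 100000), (1, -2, 0), (0, 1, 30000), (1, 0, 0), (0, 1, 0)]
vertices = []
for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel lines, no intersection
    x1, x2 = F(r1 * b2 - r2 * b1, det), F(a1 * r2 - a2 * r1, det)
    if feasible(x1, x2):
        vertices.append((x1, x2))

best = max(vertices, key=lambda v: Z(*v))
print([float(t) for t in best], float(Z(*best)))
# -> [60000.0, 30000.0] with Z = 6900/90100, about 0.0766
```

The enumeration confirms that (60000, 30000) beats the other feasible corners such as (70000, 30000) and (100000, 0), in agreement with the solution stated above.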
But it is not successful for solving LFBV problems, whereas our method and technique succeed for both LFP and LFBV problems.

APPENDIX B

Mathematica Code for Solving LFP & LFBV Problems

In this section, we develop a computer program incorporating the method developed by us. This program is similar to that of [7], but not the same; it finds all the feasible solutions of the feasible region of the resulting LP and then obtains the optimal solution. Interested readers are requested to contact the authors for the code.

APPENDIX C

Programming Input-Output System

At first we convert the LFP or LFBV problem into an LP using our proposed method for solving LFP and LFBV problems in Section 3. Then we take the input data of the problem using the run file to get the output. To get the result, we give the data of the corresponding problem: the number of rows, the number of columns, the number of constraints, the row elements, the right-hand-side constants, the cost vectors, etc. When we give the necessary input, we get the output of the corresponding problem. Below we give the output of the program with the final table and the solution point of each problem. The programming input commands are the same for all examples.
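The enumerate-then-compare strategy described in Appendix B, finding all feasible corner solutions of the resulting LP and keeping the best, can be sketched in Python as a stand-in for the authors' Mathematica program; the two-variable LP at the end is hypothetical and the sketch assumes the LP has a bounded optimum:

```python
from fractions import Fraction as F
from itertools import combinations

def solve_lp_by_enumeration(A, b, c):
    """max c.y s.t. A y <= b, y >= 0, by enumerating basic feasible points.
    A sketch of the enumerate-then-compare strategy, not the authors' code."""
    n = len(c)
    # Treat y_i >= 0 as extra rows -e_i . y <= 0.
    rows = [list(map(F, r)) for r in A] + \
           [[F(-(i == j)) for j in range(n)] for i in range(n)]
    rhs = list(map(F, b)) + [F(0)] * n
    best = None
    for idx in combinations(range(len(rows)), n):
        # Solve the n x n system of the chosen active constraints exactly.
        M = [rows[i][:] + [rhs[i]] for i in idx]
        singular = False
        for col in range(n):
            piv = next((r for r in range(col, n) if M[r][col] != 0), None)
            if piv is None:
                singular = True
                break
            M[col], M[piv] = M[piv], M[col]
            M[col] = [v / M[col][col] for v in M[col]]
            for r in range(n):
                if r != col and M[r][col] != 0:
                    M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
        if singular:
            continue
        y = [M[i][n] for i in range(n)]
        # Keep the point only if it satisfies every constraint.
        if all(sum(row[j] * y[j] for j in range(n)) <= rh
               for row, rh in zip(rows, rhs)):
            val = sum(ci * yi for ci, yi in zip(c, y))
            if best is None or val > best[0]:
                best = (val, y)
    return best

# Hypothetical LP:  max 3y1 + 2y2  s.t.  y1 + y2 <= 4,  y1 <= 3
val, y = solve_lp_by_enumeration([[1, 1], [1, 0]], [4, 3], [F(3), F(2)])
print(float(val), [float(v) for v in y])  # best vertex y = (3, 1), value 11
```

This brute-force approach is only practical for small problems, which matches its role here as an all-solutions check rather than a production simplex code.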
Input

Clear[u, r, y]
main[twobasic]; original[solution]

Output of Example …

The optimal value of the original LFP or LFBV problem is 75/8.
Original LFP/LFBV problem solution point: x[1] = 6/5, x[2] = 4, x[3] = 0, x[4] = 6/5.

Output of Example 5

The optimal value of the original LFP or LFBV problem is 7/2.
Original LFP/LFBV problem solution point: x[1] = 0, x[2] = ….

Output of Example 6

The optimal value of the original LFP or LFBV problem is 7/2.
Original LFP/LFBV problem solution point: x[1] = 0, x[2] = ….

Output of Example 7

Table: the final simplex tableau for Example 7, with columns c_j, RHS, C_B, Basis, X₁, X₂, S₁ and S₂, basic variables S₁ and S₂, and the index row C_j − Z_j.
Ratio is not possible; unbounded solution.

Output of Example 8

Ratio is not possible; unbounded solution.

Output of Example 9

The optimal value of the original LFP or LFBV problem is approximately 0.0766.
Original LFP/LFBV problem solution point: x[1] = 60000, x[2] = 30000.

Direct Mathematica Command

Exit[]
Maximize[{(0.07*x1 + 0.09*x2)/(x1 + x2 + 100), x1 + x2 <= 100000, x1 >= 2*x2, x2 <= 30000}, {x1, x2}]
{…, {x1 → …598.4, x2 → …99.}}
TimeUsed[]
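The "Ratio is not possible" message corresponds to the simplex minimum-ratio test: the leaving row minimizes RHS_i / a_i over the positive entries a_i of the entering column, and if the column has no positive entry, no ratio exists and the objective is unbounded in that direction. A small Python sketch of this test (our illustration, not the paper's code):

```python
from fractions import Fraction as F

def min_ratio_row(column, rhs):
    """Return the index of the leaving row for the given entering column,
    or None when no entry of the column is positive, i.e. the situation
    reported as 'Ratio is not possible': the LP is unbounded."""
    ratios = [(F(r) / F(a), i) for i, (a, r) in enumerate(zip(column, rhs))
              if a > 0]
    if not ratios:
        return None            # no positive pivot candidate: unbounded
    return min(ratios)[1]      # row achieving the minimum ratio

print(min_ratio_row([2, 1, 3], [4, 5, 3]))    # ratios 2, 5, 1 -> row 2
print(min_ratio_row([-1, 0, -2], [4, 5, 3]))  # no positive entry -> None
```

Ties in the minimum ratio (degeneracy) are broken here by the smallest ratio-index pair; a production simplex code would apply an anti-cycling rule such as Bland's rule instead.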
More informationLinear Programming. them such that they
Linear Programming l Another "Sledgehammer" in our toolkit l Many problems fit into the Linear Programming approach l These are optimization tasks where both the constraints and the objective are linear
More informationLECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach
LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach Basic approaches I. Primal Approach - Feasible Direction
More informationSolving Linear Programs Using the Simplex Method (Manual)
Solving Linear Programs Using the Simplex Method (Manual) GáborRétvári E-mail: retvari@tmit.bme.hu The GNU Octave Simplex Solver Implementation As part of the course material two simple GNU Octave/MATLAB
More information3 Interior Point Method
3 Interior Point Method Linear programming (LP) is one of the most useful mathematical techniques. Recent advances in computer technology and algorithms have improved computational speed by several orders
More informationCS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36
CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 36 CS 473: Algorithms, Spring 2018 LP Duality Lecture 20 April 3, 2018 Some of the
More informationAMATH 383 Lecture Notes Linear Programming
AMATH 8 Lecture Notes Linear Programming Jakob Kotas (jkotas@uw.edu) University of Washington February 4, 014 Based on lecture notes for IND E 51 by Zelda Zabinsky, available from http://courses.washington.edu/inde51/notesindex.htm.
More information