Preprocessing DEA

J.H. Dulá 1 and F.J. López 2

February 2007

Statement of Scope and Purpose. This is a comprehensive study of preprocessing in DEA. The purpose is to provide tools that reduce the computational burden of DEA studies, especially in large scale applications.

Abstract. We collect, organize, analyze, implement, test, and compare a comprehensive list of ideas for preprocessors for entity classification in DEA. We limit our focus to procedures that do not involve solving LPs. The procedures are adaptations from previous work in DEA and in computational geometry. The result is five preprocessing methods, three of which are new for DEA. Testing shows that preprocessors have the potential to classify a large number of DMUs economically, making them an important computational tool, especially in large scale applications.

Key Words: DEA, DEA computations, linear programming, computational geometry.

1. Introduction. The paper by Charnes et al. [1] which introduced DEA also offered the first linear program (LP) formulation for classifying and scoring DMUs. The approach proposed there, and the standard practice to this day, is to formulate and solve one LP for each entity, with the LP's size determined by the dimensions of the matrix generated by the full data set. This approach to efficiency classification and scoring in DEA is computationally intensive. Although much is known about accelerating the process, solving LPs places heavy computational demands on it, especially in large scale applications. A preprocessor in DEA is a procedure that can quickly, efficiently, and conclusively classify and/or score a DMU without solving an LP. This excludes methods that reduce computational requirements but somehow involve solving LPs, either by accelerating their performance or by extracting opportunistic classification information from them.
1 Corresponding author, School of Business, Virginia Commonwealth University, Richmond, VA 23284, jdula@vcu.edu.
2 College of Business Administration, University of Texas at El Paso, El Paso, TX 79968, fjlopez@utep.edu.

Preprocessors are not expected to

conclusively classify or score all the DMUs in a DEA study. They are intended to reduce the total number of LPs that will eventually have to be solved and/or to reduce their size so that they can be solved faster. Preprocessors have a long tradition in DEA computations, which includes the works by Sueyoshi and Chang [2], Sueyoshi [3], and Ali [4]. In the current work, we collect and analyze the methods that have been proposed for preprocessing in DEA and introduce three new ones. Two of the new ones are adaptations of work in the related field of computational geometry and are used to identify efficient DMUs. The third preprocessor, HyperTwist, is in a category of preprocessors based on hyperplane rotation for uncovering efficient DMUs. This category of preprocessors appears here first. We classify, formalize, analyze, implement, and compare the different preprocessors.

2. The role of preprocessors in DEA. The elements of a DEA study are: i) a model, i.e., a list of inputs and outputs that characterize the process; ii) a data set of the DMUs' values for the attributes in the model; and iii) a returns to scale assumption about the transformation process. These elements define a production possibility set: the set of all viable inputs and outputs obtainable from all combinations of the data, along with all possibilities from the free disposability consequences of the returns to scale assumption. The production possibility set is a convex polyhedral set with a portion of its boundary constituting the efficient frontier. A DMU is efficient if and only if it belongs to the efficient frontier of its production possibility set. One of the main objectives of any DEA study is the classification of entities as efficient or inefficient. This classification depends only on the three fundamental components above. A DEA study may also require the calculation of a score for each DMU and associated benchmarking information when these are inefficient.
An entity's score is the objective function value of an LP and provides information about its relative position with respect to the efficient frontier. Different LP formulations provide different scores and benchmarks. Although scores are used in practice to classify DMUs, the solutions may not provide sufficient conditions for classification, as in the case of the relaxed input and output oriented LP formulations. Scoring is not required in all DEA studies; evidence of this is the use of the familiar additive LP formulation of Charnes et al. [5]. These LPs provide necessary and sufficient conditions for classification, but their scores are mostly useless since they maximize the 1-norm to the efficient frontier (Briec [6]).

All the preprocessors discussed here are used to classify DMUs. With the exception of studies requiring super-efficiency scores, advanced classification of an efficient DMU with an inexpensive preprocessor saves having to solve an LP altogether for that entity, since it is automatically classified and scored. Any inefficient DMU that is classified by a preprocessor also obviates an LP solution if scores are not required. The value of an efficient, effective, and economical preprocessor in these situations is evident. Low cost classification of inefficient DMUs can also save work in the event that scores and benchmarks are required. If a DMU is inefficient, its data point can be omitted from the data matrix of any LP used for any remaining classifications and scoring. The technique based on this result is called reduced basis entry (RBE) (Ali [4]). Experimental work by Barr and Durchholz [7] and Dulá [8] has shown that RBE can reduce computations in DEA by 50%; the time to solve the smaller LPs that result after employing preprocessors is therefore much less than half. This suggests that knowledge of inefficient DMUs prior to having to solve any LPs can result in substantial computational savings, especially when relatively many are identified cheaply in large problems where the proportion of efficient to inefficient DMUs (the "density") is low.

3. Notation and assumptions. A point set consists of n points a^j, j = 1,...,n, each with m dimensions. The set A collects the data points; i.e., A = {a^1,...,a^n}. The i-th coordinate of a^j is denoted by a^j_i. DEA points are composed of two parts, as follows:

    a^j = [−X^j; Y^j] ∈ R^m,  j = 1,...,n;

where X^j ≥ 0, X^j ≠ 0 and Y^j ≥ 0, Y^j ≠ 0 are the input and output data vectors, respectively, for DMU j. We assume that this set is reduced in the sense that no point is duplicated. We denote by H(π, β) = {y : ⟨π, y⟩ = β} the hyperplane with normal vector π and level value β, where ⟨·,·⟩ is the inner product of two vectors.
Our development focuses on the VRS production possibility set of DEA. The results are true for general polyhedral sets and therefore for the other DEA production possibility sets, if not immediately, with minor adjustments.

4. Background and preprocessing principles. Classifying a DMU as efficient or inefficient is essentially equivalent to identifying boundary and interior points in a finitely generated polyhedral set; that is, a polyhedron defined by linear combinations of the elements of a point set (Dulá and López [9]). In DEA, the point set is composed of the data for n DMUs, each characterized by m attribute values. The polyhedral set they generate, depending on the returns to scale assumption, is the production possibility set. General polyhedral sets can have many shapes, ranging from an unbounded polyhedron, as in DEA, to a fully bounded polytope, such as the convex hull of the point set. The problem of identifying boundary and/or interior points in finitely generated polyhedral sets appears in other areas. It is familiar in computational geometry, point depth analysis in nonparametric multivariate statistics, redundancy in systems of linear inequalities, and stochastic programming (see Dulá and López [9] for more details and references). Ideas for preprocessing convex hulls to identify extreme points appear in Dulá et al. [10]. Many results from these areas are available to DEA, but most relevant here is the work on preprocessing for convex hulls. Previous research on preprocessors for finitely generated polyhedral sets comes mainly from two sources with two different backgrounds: DEA and computational geometry. Dulá et al. [10] propose preprocessors to identify extreme points, an important class of boundary points, specifically for convex hulls. The procedures in [10] incorporate a variety of ideas, ranging from simple sortings to calculating Euclidean distances to techniques based on inner products. Sueyoshi and Chang [2], Sueyoshi [3], and Ali [4] propose preprocessors specifically for DEA. Sueyoshi and Chang [2] introduce the concept of domination to identify inefficient DMUs.
Ali [4] also looks at domination and proposes simple and effective preprocessors for identifying efficient DMUs based on sorting and inner products. Existing preprocessors can be classified as: i) approaches that identify boundary points of polyhedral sets, of which extreme points are especially important (e.g., efficient and extreme-efficient DMUs in DEA), and ii) methodologies that identify interior points (e.g., inefficient DMUs in DEA). In the first category there are three basic ideas:

Sorting. The points with the maximum value in each dimension in the VRS DEA model are extreme-efficient if unique. If not unique, they correspond to DMUs on the boundary, which means they may or may not be efficient (e.g., weakly efficient). These points are identified by sorting the dimensions. The computational effort for this is minimal, and the procedure has the potential of identifying m efficient DMUs in DEA. This idea has been used in Dulá et al. [10] in the context of general convex hulls, and in Ali [4] in the context of DEA. An adaptation to the DEA constant returns to scale model appears in Shaheen [11]. A DMU that generates a unique minimum ratio of its components with a projection's norm is necessarily an extreme ray of the production possibility set. At most m new CRS extreme-efficient DMUs can be identified this way.

Norm maximization. Dulá et al. [10] propose identifying extreme points of the convex hull by finding the element in A that maximizes the Euclidean distance to an arbitrary point, p̂, in R^m. This is a special case of a more general result, given next:

Result 1. Let a^ĵ = argmax { ‖p̂ − a^j‖_ℓ : j = 1,...,n }, where ‖·‖_ℓ is the ℓ-norm of the argument. If a^ĵ is unique, then it is an extreme point of the convex hull of A.

Proof. See Appendix 1.

The procedure in [10] is based on an application of Result 1 for the case of the 2-norm. In this norm, ties reveal additional extreme points of the convex hull. This will be true of any norm whose support sets for a ball must contain exactly a single point, which is not the case for the 1-norm and the ∞-norm. Result 1 can be adapted to a DEA VRS production possibility set to identify extreme-efficient DMUs when the focal point p̂ is judiciously chosen. For example, for the case of the 2-norm, one such point is the worst virtual DMU: p̂_i = min{a^j_i : j = 1,...,n}, for all i.
Notice that the ∞-norm identifies one of the points revealed through sorting, namely the one with the overall largest component, and, as we will see below, an idea proposed by Ali in [4] is an application of Result 1 to the 1-norm. Computations for a procedure based on norm maximization involve calculating and sorting inner products. Result 1 is a new contribution to computational geometry and DEA.
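As an illustration, the norm-maximization idea of Result 1 can be sketched in a few lines of Python for the 2-norm with the worst virtual DMU as focal point. All names and the sample data below are ours, not the paper's; the uniqueness check uses exact float equality, which is adequate for a sketch.

```python
# Sketch of Result 1 with the 2-norm: the farthest point from the worst
# virtual DMU is an extreme point of the convex hull when it is unique.

def max_euclid_candidate(points):
    """Return (index of farthest point from the worst virtual DMU,
    whether that maximizer is unique)."""
    m = len(points[0])
    # worst virtual DMU: componentwise minimum of the data
    p_hat = [min(p[i] for p in points) for i in range(m)]

    def sq_dist(p):
        # the squared 2-norm preserves the ordering of the distances
        return sum((p[i] - p_hat[i]) ** 2 for i in range(m))

    dists = [sq_dist(p) for p in points]
    best = max(dists)
    winners = [j for j, d in enumerate(dists) if d == best]
    return winners[0], len(winners) == 1

pts = [(1.0, 1.0), (4.0, 2.0), (2.0, 2.0), (3.0, 1.5)]
j, unique = max_euclid_candidate(pts)
```

Here the worst virtual DMU is (1, 1) and the farthest point is (4, 2), a unique maximizer and hence an extreme point.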

Hyperplane translation. The values of the inner products of the data points in a point set, A, with some arbitrary vector π̂ ≠ 0 attain a maximum value β* at a point, say a^ĵ. The vector π̂ and the value β* define a hyperplane H(π̂, β*), which supports the convex hull of A at a^ĵ. If a^ĵ is the unique support element, this suffices to conclude that it is an extreme point. When there is more than one support element, all are boundary points. By restricting π̂ > 0, the maximum inner product, β*, of the data points with this positive normal defines a hyperplane, H(π̂, β*), that supports the VRS production possibility set. If the support set is a singleton, then the corresponding DMU is extreme-efficient. If the support set contains more points, all DMUs participating in the tie are efficient. Since π̂ > 0, the support set cannot contain weakly efficient DMUs. This idea can be visualized as translating a hyperplane in the direction of its normal through the polyhedral hull until it reaches the last point. The computational effort involves inner products and sorting. The idea for convex hulls was implemented and tested in Dulá et al. [10]. Ali's [4] output/input aggregation ratio (Lemma 3, p. 64) is a special case where a specific hyperplane with normal vector π̂ = (1,...,1) is used. (This is also an application of Result 1 using the 1-norm.) The new preprocessor based on hyperplane translation proposed below specifically for DEA allows the use of multiple hyperplanes, restricted only by the condition π̂ > 0 on their normals.

The second category of preprocessors identifies inefficient DMUs. We are aware of only one approach in this category.

Domination. DMU ĵ totally dominates DMU j if a^ĵ ≥ a^j. The dominated DMU is clearly inefficient. A full implementation of this preprocessor involves at most a quadratic number of vector subtractions and comparisons. This has the potential of identifying a large number of inefficient DMUs.
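The domination test just described can be sketched in Python. We assume, as in the notation section, that each point stores outputs and negated inputs, so "bigger is better" in every coordinate and domination is a componentwise comparison; names and sample data are ours.

```python
# Minimal sketch of the total-domination test: DMU k is inefficient if some
# other DMU weakly exceeds it in every coordinate (points stored with inputs
# negated, so larger is better everywhere).

def dominated_indices(points):
    """Indices of DMUs totally dominated by some other DMU."""
    n = len(points)
    out = set()
    for j in range(n):
        for k in range(n):
            if j == k or k in out:   # skip self and already-classified points
                continue
            if all(x >= y for x, y in zip(points[j], points[k])):
                out.add(k)           # a^j >= a^k componentwise: k is dominated
    return sorted(out)

pts = [(3.0, 2.0), (1.0, 1.0), (2.0, 3.0), (0.5, 0.5)]
# (1,1) and (0.5,0.5) are dominated by (3,2)
```

Skipping points already classified mirrors the enhancement discussed later: dominated points are dropped from subsequent comparisons.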
The procedure applies to all returns to scale assumptions. This idea was originally proposed by Sueyoshi and Chang [2]. In the next section we implement the preprocessing ideas described above for the case of the VRS DEA model. In addition, we introduce two new preprocessors for identifying efficient DMUs in DEA. The first, HyperTran, is an adaptation of the hyperplane translation procedure for convex hulls from Dulá et al. [10]. The second, HyperTwist, falls into an entirely new category based on

hyperplane rotation. It results from adapting a procedure introduced in López and Dulá [12] to assess the impact of adding a new attribute to a DEA model.

5. Implementations. The preprocessing principles discussed in Section 4 can be the basis for a variety of methods for preprocessing in DEA. Methods can be deterministic or probabilistic, parameter-dependent or parameter-free, etc. In the spirit of consistency we propose methods based on the following guidelines: i) they are deterministic; ii) they do not depend on defining parameters that require experimental "tuning"; and iii) the number of computations is bounded. The methods produced under these guidelines are not affected by convergence issues or other stopping criteria. This provides consistency across the different preprocessing principles and facilitates comparisons. We design five methods specifically for VRS DEA. The first two, DimensionSort and Dominator, have been used in DEA before. The other three preprocessors, MaxEuclid, HyperTran, and HyperTwist, are new to DEA. The DMUs in the data set are classified as follows: E is the set of efficient DMUs; E* is the set of extreme-efficient DMUs, with E* ⊆ E; and I is the set of inefficient DMUs. Each method is formally presented below. The pseudo-codes are followed by a discussion.

Sorting Method: DimensionSort.

Procedure: DimensionSort
INPUT: A. OUTPUT: Ê ⊆ E* ⊆ A.
Initialization. Ê ← ∅.
For i = 1 to m, Do:
    j* = argmax { a^j_i : j = 1,...,n }.
    If j* is unique, Then Ê ← Ê ∪ {a^{j*}}.
    Else, Resolve Ties (resolve ties by applying this procedure recursively to the points involved in the tie on the remaining dimensions).
    End if.
Next i.
Finalization. Ê contains extreme-efficient DMUs.

Notes on Procedure DimensionSort.
1. Procedure DimensionSort identifies extreme-efficient DMUs. Each DMU that emerges from a pass will have at least one component which is the largest in its dimension. This is sufficient to conclude that the corresponding point is extreme in the production possibility set; it is guaranteed by the tie resolution procedure and by the no-duplication assumption. The number of different extreme-efficient DMUs that this method identifies is at most m.
2. Each of the m iterations requires sorting n values. Note that tie resolution can be invoked as many as m − 1 times within each iteration, with possibly as many as n points participating in each tie.
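The core of DimensionSort, without the recursive tie resolution, can be sketched as follows; names and sample data are ours.

```python
# Sketch of DimensionSort: for each dimension, keep the argmax only when it
# is unique, the sufficient condition for extreme-efficiency. Ties would be
# resolved recursively on the remaining dimensions (omitted here).

def dimension_sort(points):
    """Indices of DMUs that uniquely attain the maximum in some dimension."""
    m = len(points[0])
    found = set()
    for i in range(m):
        col = [p[i] for p in points]
        best = max(col)
        winners = [j for j, v in enumerate(col) if v == best]
        if len(winners) == 1:   # unique maximizer => extreme-efficient
            found.add(winners[0])
    return sorted(found)

pts = [(5.0, 1.0), (1.0, 4.0), (5.0, 0.5), (2.0, 2.0)]
```

On this sample, dimension 0 is tied between the first and third points, so only the unique maximizer of dimension 1 is returned.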

Domination Method: Dominator.

Procedure: Dominator
INPUT: A. OUTPUT: Î ⊆ I ⊆ A.
Initialization. Î ← ∅.
For j = 1 to n, Do:
    For k = 1 to n, k ≠ j, Do:
        If a^k is still unclassified, Then
            If a^j ≥ a^k, Then Î ← Î ∪ {a^k}.
            End if.
        End if.
    Next k.
Next j.
Finalization. Î contains inefficient DMUs.

Notes on Procedure Dominator.
1. Procedure Dominator identifies exclusively inefficient DMUs, including weakly efficient ones. Dominator has the potential to identify a large subset of the inefficient DMUs.
2. The implementation is enhanced by omitting from subsequent comparisons points that are identified as inefficient. This means we can expect to handle fewer and fewer points as the procedure progresses.
3. Decomposition schemes. Points in the interior of polyhedral sets defined by subsets of the data are also interior to the polyhedron generated by the complete point set. An effective preprocessor can be based on partitioning the data into blocks to identify inefficient DMUs and repeating this with new blocks composed of entities with unknown status until a final single block is processed. This decomposition approach can be designed using Dominator to identify inefficient DMUs within blocks. Since an implementation requires a decision about the size of the initial and intermediary blocks, it involves experimental tuning, disqualifying it from further consideration in this article.

4. The procedure requires at most (n − 1)² comparisons. The enhancement may reduce this substantially.

Euclidean Distance Method: MaxEuclid.

Procedure: MaxEuclid
INPUT: A. OUTPUT: Ê ⊆ E* ⊆ A.
Initialization. Ê ← ∅.
For i = 1 to m, Do:
    p̂_i = min_j a^j_i.
Next i.
j* = argmax { ⟨a^j − p̂, a^j − p̂⟩ : j = 1,...,n }.
Ê ← Ê ∪ {a^{j*}}.
Finalization. Ê contains extreme-efficient DMUs.

Notes on Procedure MaxEuclid.
1. Procedure MaxEuclid applies a special case of Result 1 above. Although other norms may reveal different extreme-efficient DMUs, we anticipate that the same maximizer would emerge in many of them.
2. Only extreme points maximize the Euclidean distance from a properly selected focal point. Although the convex hull of the data when the focal point, p̂, is included may generate extreme points that are not extreme-efficient DMUs, Result 2 in Appendix 1 demonstrates how any such points cannot maximize the 2-norm. For this reason, Procedure MaxEuclid only uncovers extreme-efficient DMUs and, in case of ties, all points are extreme-efficient, although not necessarily on a common face. As many as all the extreme-efficient DMUs could be identified by MaxEuclid if they are located on the boundary of an m-dimensional

hypersphere with the focal point p̂ at its center. More realistically, MaxEuclid will identify one extreme-efficient DMU, and more than that is unlikely.
3. Procedure MaxEuclid requires the calculation of a focal point. The point used by the procedure, p̂, can be interpreted as a worst virtual DMU. Other focal points are possible. For example, focal points can be located such that the point set belongs to the positive orthant that the focal point determines. Some sort of parameter needs to be defined to generate a sequence of useful focal points that are sufficiently separated from each other so as to result in the identification of different extreme points. This parametric dependence violates our guidelines.
4. This implementation of the 2-norm for identifying extreme-efficient DMUs requires calculating and sorting n inner products. Note that the ordering of values obtained by the 2-norm is the same as that of their squares.

Translating Hyperplanes Method: HyperTran.

Procedure: HyperTran
INPUT: A. OUTPUT: Ê ⊆ E ⊆ A.
Initialization. Ê ← ∅.
For i = 1 to m, Do:
    p̂_i = min_j (a^j_i) − ε.
Next i.
For j = 1 to n, Do:
    π^j = a^j − p̂.
    j* = argmax { ⟨π^j, a^k⟩ : k = 1,...,n }.
    Ê ← Ê ∪ {a^{j*}}.
Next j.
Finalization. Ê contains efficient DMUs.
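A runnable sketch of the translation idea follows. The function names, the sample data, and the particular value of the perturbation ε are ours; points are assumed stored in "bigger is better" form.

```python
# Sketch of HyperTran: each data point a^j defines a strictly positive normal
# pi_j = a^j - p_hat, where p_hat is the worst virtual DMU perturbed by eps;
# the last point touched by the translated hyperplane is efficient.

def hyper_tran(points, eps=1e-6):
    """Indices of DMUs found as supports of the n translated hyperplanes."""
    m = len(points[0])
    p_hat = [min(p[i] for p in points) - eps for i in range(m)]
    found = set()
    for a in points:
        pi = [a[i] - p_hat[i] for i in range(m)]   # strictly positive normal
        prods = [sum(pi[i] * b[i] for i in range(m)) for b in points]
        best = max(prods)
        winners = [k for k, v in enumerate(prods) if v == best]
        found.update(winners)   # unique support => extreme-efficient;
                                # on ties, all support points are efficient
    return sorted(found)

pts = [(1.0, 3.0), (3.0, 1.0), (2.0, 2.5), (1.5, 1.5)]
```

On this sample, every translated hyperplane ends at one of the three boundary points; the interior point (1.5, 1.5) is never a support and is not returned.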

Notes on Procedure HyperTran.
1. Procedure HyperTran identifies efficient DMUs by translating hyperplanes until they become supports for the production possibility set. Efficiency is assured by the fact that the normals, π^j, of the supporting hyperplanes are strictly positive. The procedure is not confounded by weak efficiency. Every hyperplane translation will identify at least one extreme-efficient DMU; in case of ties, all are efficient.
2. The point p̂ used in the procedure is, again, the worst virtual DMU, except it undergoes a slight perturbation to assure that π^j > 0. As with MaxEuclid, other focal points are possible. HyperTran has the potential to identify many of the efficient DMUs since every data point generates a translating hyperplane.
3. The procedure requires n² inner products and n sortings, one for each π^j defined.

Rotating Hyperplanes Method: HyperTwist.

We use the following terms: u = (1,...,1) ∈ R^m, and the l-th unit vector in R^m is e_l, l = 1,...,m.

Procedure: HyperTwist
INPUT: A. OUTPUT: Ê ⊆ E ⊆ A.
Global Initialization. Ê ← ∅.
For l = 1 to m, Do:
    Step 0. (Local Initialization) k = 0; π^l = u − e_l; j_0 = argmax { ⟨π^l, a^j⟩ : j = 1,...,n }. On ties, select j_0 such that a^{j_0}_l is maximal. Ê ← Ê ∪ {a^{j_0}}.
    Step 1. (Pivot)
        i. k = k + 1.
        ii. For j = 1,...,n, Do:
                γ_j = (⟨π^l, a^{j_{k−1}}⟩ − ⟨π^l, a^j⟩) / (a^j_l − a^{j_{k−1}}_l), if (a^j_l − a^{j_{k−1}}_l) > 0;
                γ_j = M (large number), otherwise.
            Next j.
        iii. γ* = min_j γ_j.
        iv. If γ* = M, STOP. Otherwise go to Step 2.
    Step 2. (Find RHS) β = ⟨π^l, a^{j_{k−1}}⟩ + γ* a^{j_{k−1}}_l.
    Step 3.
        i. Define J = { j : ⟨π^l, a^j⟩ + γ* a^j_l = β }.
        ii. Select j_k ∈ J such that a^{j_k}_l is maximal over J. Ê ← Ê ∪ {a^{j_k}}. Go to Step 1.
Next l.
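The pivot recursion for a single pass can be sketched in Python. This is our re-implementation under stated assumptions (fixed base normal π^l, rotation measured by the accumulating factor γ on e_l, points in "bigger is better" form), not the authors' Fortran code; names and data are ours.

```python
# Sketch of one HyperTwist pass for dimension l: the supporting hyperplane
# starts with normal u - e_l (parallel to dimension l) and is rotated toward
# e_l pivot by pivot; every pivot point visited is recorded as efficient.

def hyper_twist_pass(points, l, M=float("inf")):
    m = len(points[0])
    pi = [1.0] * m
    pi[l] = 0.0                 # base normal u - e_l

    def dot(v, p):
        return sum(v[i] * p[i] for i in range(m))

    # local initialization: support point of the starting hyperplane,
    # ties broken by the largest l-th component
    cur = max(range(len(points)),
              key=lambda j: (dot(pi, points[j]), points[j][l]))
    found = [cur]
    while True:
        # pivot: smallest rotation gamma that reaches another point
        gamma, nxt = M, None
        for j, p in enumerate(points):
            rise = p[l] - points[cur][l]
            if rise > 0:
                g = (dot(pi, points[cur]) - dot(pi, p)) / rise
                if g < gamma or (g == gamma and nxt is not None
                                 and p[l] > points[nxt][l]):
                    gamma, nxt = g, j
        if nxt is None:         # gamma = M: hyperplane fully rotated, STOP
            return found
        found.append(nxt)
        cur = nxt

pts = [(1.0, 3.0), (3.0, 1.0), (2.0, 2.5), (1.5, 1.5)]
```

Each pivot moves to a point strictly higher in dimension l, so the pass terminates after at most n pivots, matching note 8 below on the cost per pass.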

Notes on Procedure HyperTwist.
1. The derivation of the results that make this procedure work has been relegated to an appendix.
2. The procedure generates a sequence of supporting hyperplanes, changing their orientation as they visit extreme points of the production possibility set. Each change in orientation corresponds to a pivot operation. One pass of Procedure HyperTwist generates a sequence of supporting hyperplanes that partially wrap the production possibility set along the selected dimension, l. The procedure performs m passes, one for each dimension. The hyperplanes begin parallel to the l-th dimension and end orthogonal to it. In between, the hyperplanes twist and turn as if hinged at extreme points progressively higher in the l-th dimension.
3. We can see how HyperTwist works in the example depicted in Figure 1. The figure shows the sequence of rotating hyperplanes in one of the passes of the procedure. Here, Output 2 is the selected l-th dimension of a three-dimensional VRS production possibility set. Each pivot takes place at a different extreme point. In this example, we see how three extreme-efficient DMUs are detected by the procedure.
4. The classification of a^{j_0} as extreme-efficient in the local initialization is a result of the same principles that apply for translating hyperplane methods. In the event of ties, other extreme points among them can also be classified as efficient. If additional ties occur for the maximum l-th component, then these are all efficient.
5. In Steps 0 and 3ii, j must be selected such that a^j_l is maximal. If ties persist, it is convenient to choose an extreme point among them to serve as the new pivot point. This can be done expeditiously by identifying extreme values of the coordinates of the points in the tie.
6. In Step 3, J is the index set of all the data points on the same current supporting hyperplane defined by normal π^l + γ* e_l and level value β.
If the cardinality of J is two, then both points are extreme points of the production possibility set and hence correspond to VRS extreme-efficient DMUs. If the cardinality is more than two and 0 < γ* < M, then all the points involved in the tie are VRS efficient.

[Figure 1 shows four panels, each plotting Input against Output 1 and Output 2. Initialization: the first supporting hyperplane is parallel to the reference dimension, Output 2. Iteration 1: supporting hyperplane after the first pivot; the support set is an edge between two extreme points. Iteration 2: supporting hyperplane after the second pivot; the edge reveals a third extreme point. Last iteration: final pivot; the hyperplane is almost orthogonal to the reference dimension.]

Figure 1. One pass of HyperTwist and the sequence of rotating supporting hyperplanes.

7. Each pass of HyperTwist will identify at least one extreme-efficient DMU in its local initialization. There is not, however, any guarantee that any more will be identified; the first pivot could be the last.
8. Computational requirements for HyperTwist involve only inner products, ratios, and sortings. Within each pass, the number of inner products per pivot is at most n, although it can be expected to decrease sharply as the pivots progress. The maximum number of pivots in a pass is n. HyperTwist is another procedure with potential to identify many efficient DMUs.

6. Computational results. We tested the five procedures for preprocessing DEA to investigate their performance and how this is affected by the three most important data characteristics: cardinality (number of DMUs), dimension (total number of attributes), and density (proportion of efficient DMUs). We generated synthetic point sets with cardinalities of 2,500, 5,000, 7,500, and 10,000 DMUs; in 5, 10, 15, and 20 dimensions; and with densities of 1%, 13%, and 25%. Note that the largest of these files can be considered large scale problems. The combination of four cardinalities,

four dimensions, and three densities results in 48 data files. This synthetic problem suite allows us to control for the important DEA characteristics to obtain useful conclusions. To investigate the performance of the procedures on real data, we applied them to a data set from the Federal Financial Institutions Examination Council [13], which contains yearly data about commercial banks. The synthetic point sets were generated as follows. First, the efficient DMUs were generated as elements of the boundary of a hypersphere in the orthant defined by the input/output mix. The inefficient DMUs were generated by taking points on the boundary of the sphere and contracting them, radially, using a randomly generated factor from a triangular distribution. The idea of using this distribution was to make it more likely that the interior points would be close to the boundary of the production possibility set. Next, each dimension (attribute) was scaled using a factor randomly selected from a uniform distribution with lower limit 1. This change does not affect the density of the point set but makes the data more realistic by making it less symmetric. The procedures were coded in Fortran. The experiments were performed on a dedicated Pentium 4 PC running at 2.66 GHz with 512 MB of RAM. Making comparisons between preprocessors based exclusively on the number of classifications may be misleading since the corresponding cpu times tend to be quite different. We propose using as a common measure for comparisons the yield of the procedures, defined here as the number of classifications made per tenth of a cpu second. A classification is the identification of an inefficient DMU in the case of Dominator or an efficient DMU in all other cases. The yields reported below are the average of three runs. These results appear in Appendix 3.
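A generator in the spirit of the one just described can be sketched as follows. The paper's exact parameters are not all recoverable here, so the triangular-distribution limits, the uniform scale's upper bound, and all names are our assumptions; the sketch only illustrates the structure: sphere points for efficient DMUs, radial contractions for inefficient ones, and a per-dimension rescale.

```python
# Sketch of a synthetic DEA point generator: efficient points on the unit
# hypersphere in the positive orthant; inefficient points are radial
# contractions by a triangular factor with mode 1, crowding the boundary;
# each dimension is then rescaled by a uniform factor. Parameter values
# (0.2, 100.0) are our choices, not the paper's.
import math
import random

def make_dea_points(n, m, density, seed=0):
    rng = random.Random(seed)

    def sphere_point():
        v = [abs(rng.gauss(0.0, 1.0)) for _ in range(m)]  # positive orthant
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    n_eff = round(n * density)
    pts = [sphere_point() for _ in range(n_eff)]          # efficient DMUs
    for _ in range(n - n_eff):                            # inefficient DMUs
        c = rng.triangular(0.2, 1.0, 1.0)  # mode at 1: close to the boundary
        pts.append([c * x for x in sphere_point()])
    scales = [rng.uniform(1.0, 100.0) for _ in range(m)]  # break symmetry
    return [[s * x for s, x in zip(scales, p)] for p in pts]

data = make_dea_points(n=100, m=5, density=0.13)
```

Rescaling each attribute by a common factor preserves which points are radially contracted, so the density of the generated set is unchanged, as the text notes.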
The computational effort required by DimensionSort and MaxEuclid was hardly measurable for most of the problems, and their contribution is limited. DimensionSort identified m extreme-efficient DMUs almost all the time (in a few cases it found fewer than m). MaxEuclid always identified exactly one extreme-efficient DMU. In both cases the clock did not record any usable cpu time, and therefore no yields were calculated. These procedures essentially provide free classifications.

For the remaining three implementations, it is useful as a baseline reference to report results obtained in classifying DEA efficiency and inefficiency using LPs. We processed selected point sets with an LP formulation that starts with the full data set and applies the reduced basis entry (RBE) enhancement described in [4], [7], [8]. (The yields reported in Table 1 for traditional DEA studies using LPs are about twice those of unenhanced naive implementations ([7], [8]).)

Table 1. Yield of enhanced traditional LP approach for selected problems.
Dimension | Cardinality | Density (%) | Yield

The next three preprocessors were dramatically more effective in classifying DMUs than the previous two. Even though in general it is difficult to find predictable effects, given that preprocessors are vulnerable to data peculiarities, in some instances it is possible to identify general patterns. We analyze these preprocessing implementations next. Our implementation of Dominator confirms what Sueyoshi and Chang [2] observed: it is a powerful preprocessor with the potential to classify a large number of inefficient DMUs at a low cost. Sueyoshi and Chang's initial implementations resulted in the identification of 100% of the inefficient DMUs; they proceeded to modify their problem generator to avoid this condition. Our inefficient DMUs (points) were generated with this issue in mind, hence the triangular distribution for the contraction factor in their generation. Even so, an average of 78.43% of the inefficient DMUs were totally dominated and thus identified by Dominator. Figure 2 illustrates representative cases of the behavior of the yield of Dominator, controlling for cardinality, dimension, and density. The effect of increased cardinality is a clear decrease in the yield of this procedure. This is to be expected.
Even though the number of classifications can be expected to increase close to linearly given the assumption that density remains the same, the number of comparisons is almost quadratic in the number of DMUs. The impact of dimension is also predictable. Detecting whether a DMU dominates another requires comparing all coordinates,

causing computational effort to increase with the number of dimensions without additional classifications, which adversely affects the yield. The results of our experiments make it difficult to understand the impact of density on yield. Dominator's yield decreased as density increased from 1% to 13% to 25% when there were five dimensions. With the other dimensions, the yield almost always increases as density goes from 1% to 13% but then decreases when density changes from 13% to 25%. Lower yields with 1% density than with 13% square with the expectation of a greater probability, in the latter case, of finding a dominating DMU for each dominated one during the procedure. One might also expect, however, the following effects to start to prevail and reduce yields as density increases: 1) more DMUs are efficient and therefore undominated; and 2) there is an erosion of the effectiveness of the enhancement, since fewer dominated DMUs are removed from the analysis as the procedure progresses. The effect is dramatic in the limit, since a density of 100% would result in a zero yield. This suggests a relation with density where yields initially increase due to the impact of extra efficient DMUs that dominate others but eventually decrease due to the effect of fewer available dominated entities and less advantage from the enhancement.

[Figure 2 shows three panels: yield versus cardinality (dimension 10, density 25%), yield versus dimension (cardinality 7500, density 1%), and yield versus density (cardinality 7500).]

Figure 2. Yield of Dominator.

The results of the implementation of HyperTran are illustrated in Figure 3. These graphs were selected to represent what was typically observed. HyperTran also seems to respond to cardinality and dimension more predictably than to density. Increases in cardinality generate more work for the procedure without necessarily increasing the number of DMUs classified.
This means we can expect yields to be adversely affected by this attribute, and this is confirmed in our tests. The apparent adverse effect of increases in dimension on the yield can be explained by the increase in the amount of work in the calculation of inner products. These experiments do not allow any useful determination about the impact of density on HyperTran's yield. The procedure may be

sensitive to the geometry and scaling of the data. Translating hyperplanes would tend to end up at points with the more extreme magnitudes in dimensions where the attribute units are large. Also, extreme points may have point clouds nearby that would tend to attract a disproportionate number of hyperplanes to themselves.

[Figure 3. Yield of HyperTran. Three panels: yield vs. cardinality (dimension 15, density 13%); yield vs. dimension (cardinality 10000, density 25%); yield vs. density (cardinality 7500).]

Figure 4 depicts three typical situations in our experience with HyperTwist. We can see that increasing cardinality tends to decrease yield. The reason would be the same as with HyperTran; that is, the number of inner products grows with the number of DMUs without a necessarily proportional identification of efficient points. The procedure's yield is also adversely affected by dimension. Increasing the dimension increases the number of passes through the main loop of the procedure, and the inner products have more components. The yield of HyperTwist tends to increase slightly or remain more or less constant when density increases. This may sound counterintuitive, since one would expect HyperTwist to encounter more extreme points in a denser environment, which should result in noticeable yield improvements. Higher density may result in more classifications, but this is counteracted by the increase in pivots from the additional extreme points.

HyperTran and HyperTwist are the two most complex procedures studied here and have similar functionality: the identification of efficient DMUs. Both procedures are based on the same supporting-hyperplane principle, namely that the support set of a supporting hyperplane is composed of boundary points. Because of this, it is appropriate to compare the two.

Hyperplane Translation and HyperTwist. A comparison between HyperTran and HyperTwist is illustrated in Figure 5.
As noted above, the yield of both procedures is adversely impacted by cardinality and dimension. Higher densities have a slight positive effect on HyperTwist, but its

effect on HyperTran is not as clear. As shown in Figure 5, and as is true in general, HyperTwist is the more efficient preprocessor, with yields frequently an order of magnitude or more greater than those of HyperTran.

[Figure 4. Yield of HyperTwist. Three panels: yield vs. cardinality (dimension 10, density 25%); yield vs. dimension (cardinality 10000, density 25%); yield vs. density (cardinality 7500).]

We finish this section by reporting the performance of Dominator, HyperTran, and HyperTwist on data from the Federal Financial Institutions Examination Council [13]. Using these data we built three problems. The first one contains 4,971 DMUs, three inputs, and four outputs; the second has 12,456 DMUs, five inputs, and three outputs; and the third includes 19,939 DMUs, six inputs, and five outputs. The yield of the procedures, along with that of the LP approach for contrast, is reported in Table 2:

Table 2. Yield of preprocessors (and LPs) on bank data.
Problem    Card.     Dim.   Dominator   HyperTran   HyperTwist   LPs
1           4,971      7        --          --           --        --
2          12,456      8        --          --           --        --
3          19,939     11        --          --           --        --
[Yield entries not recoverable from the transcription.]

These data display the negative effects on yields of increases in cardinality and dimension observed in the synthetic data sets. HyperTwist continues to outperform HyperTran on these real-world data. It is important to remember that these preprocessors are also useful when dealing with polyhedral sets different from the DEA VRS production possibility set.
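To make the translation principle concrete, the following is a minimal sketch of the core test behind a HyperTran-style translation (hypothetical names, not the implementation tested above). It assumes the data have been oriented so that larger is better in every coordinate and that the normal direction is componentwise positive; pushing the hyperplane outward until it last touches the point cloud identifies, when the touching DMU is unique, an extreme and hence efficient point.

```python
def translate_support(dmus, pi, tol=1e-12):
    """Core test of a HyperTran-style translation (sketch; names hypothetical).
    Data are assumed oriented so larger is better; pi must be componentwise
    positive.  The hyperplane <pi, x> = beta is translated outward until it
    supports the cloud; a uniquely supported DMU is extreme, hence efficient.
    Returns the index of that DMU, or None when the support is not unique."""
    scores = [sum(p * x for p, x in zip(pi, a)) for a in dmus]
    beta = max(scores)  # translation distance at which the hyperplane supports
    support = [j for j, s in enumerate(scores) if s > beta - tol]
    return support[0] if len(support) == 1 else None

# (2, 3) uniquely maximizes <(1, 2), a>: it is an efficient extreme point.
points = [(1.0, 1.0), (2.0, 3.0), (3.0, 2.0)]
print(translate_support(points, (1.0, 2.0)))  # 1
```

Only inner products and one maximization are involved, which is why such a pass is so much cheaper than an LP per DMU.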

[Figure 5. Comparison between HyperTran and HyperTwist. Three panels, each plotting both procedures: yield vs. cardinality (dimension 20, density 13%); yield vs. dimension (cardinality 10000, density 1%); yield vs. density (cardinality 7500).]

7. Concluding remarks. Preprocessors are an important aspect of the development of computational tools in many areas of OR/MS, especially when speed is critical in large scale applications. Today, these types of approaches speed up linear programming, integer programming, and countless specialized procedures developed for optimization and combinatorics, making them better able to cope with bigger and more complex problems. The contributions of this paper are to collect, organize, analyze, implement, test, and compare different preprocessors to classify DEA data points as efficient or inefficient. We introduce three new preprocessors to DEA: one based on norm maximization; another on hyperplane translation, HyperTran; and a third on hyperplane rotation, HyperTwist. We designed a total of five preprocessing methods for the DEA variable returns to scale model. Computational results show that the preprocessor that identifies inefficient DMUs by testing for domination, Dominator, is highly effective. Two other preprocessors, HyperTwist and HyperTran, both based on the principle that supporting hyperplanes identify efficient entities, produce excellent results, with HyperTwist consistently the better of the pair. Testing compared yields, defined to be the number of DMUs classified as efficient or inefficient per cpu time unit (tenth of a second). Testing shows that the yield of preprocessors usually decreases when cardinality or dimension increases. The impact of changes in density, defined as the percentage of points that are efficient, is not as clear, but it appears that yield tends to decrease as density increases for Dominator, while density has little impact on HyperTwist.
It is clear, though, that classifying efficient or inefficient DMUs with preprocessors is computationally cheap. The effectiveness of preprocessors stems from the fact that they do not solve LPs and only perform simple computations such as sortings, inner products, and ratios. Preprocessors should be a part of the DEA analyst's toolbox, especially when working with large data sets.
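To make the point about simple computations concrete, here is a minimal sketch of the pairwise domination test behind a Dominator-style pass (hypothetical names, not the tested implementation). It assumes the data have been oriented so that larger is better in every coordinate (outputs as-is, inputs negated), so that a DMU found to be dominated is inefficient and, as in the enhancement discussed above, is removed from further comparisons.

```python
def dominates(a, b):
    """True if a dominates b: a >= b in every coordinate, > in at least one.
    Assumes coordinates are oriented so that larger is better."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def dominator_pass(dmus):
    """Classify DMUs as dominated (hence inefficient) without solving LPs.
    Returns the set of indices of dominated DMUs.  Dominated DMUs are
    skipped as candidate dominators (the enhancement); by transitivity of
    domination, an undominated dominator always remains available."""
    dominated = set()
    for j, b in enumerate(dmus):
        for k, a in enumerate(dmus):
            if k != j and k not in dominated and dominates(a, b):
                dominated.add(j)
                break
    return dominated

# Tiny example: (1, 1) is dominated by (2, 3); the other two are undominated.
points = [(2.0, 3.0), (1.0, 1.0), (3.0, 0.5)]
print(dominator_pass(points))  # {1}
```

Each comparison is a coordinate-wise scan, which is the source of both the procedure's low cost and the near-quadratic growth in comparisons noted in the testing section.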

References.
[1] Charnes, A., W.W. Cooper, and E. Rhodes, Measuring the efficiency of decision making units, European Journal of Operational Research, Vol. 2, No. 6, 1978.
[2] Sueyoshi, T. and Y-L. Chang, Efficient algorithm for additive and multiplicative models in Data Envelopment Analysis, Operations Research Letters, Vol. 8, 1989.
[3] Sueyoshi, T., A special algorithm for an additive model in Data Envelopment Analysis, Journal of the Operational Research Society, Vol. 3, 1990.
[4] Ali, A.I., Streamlined computation for data envelopment analysis, European Journal of Operational Research, Vol. 64, 1993.
[5] Charnes, A., W.W. Cooper, B. Golany, L. Seiford, and J. Stutz, Foundations of data envelopment analysis for Pareto-Koopmans efficient empirical production functions, Journal of Econometrics, Vol. 30, 1985.
[6] Briec, W., Hölder distance function and measurement of technical efficiency, Journal of Productivity Analysis, Vol. 11, 1998.
[7] Barr, R.S. and M.L. Durchholz, Parallel and hierarchical decomposition approaches for solving large-scale Data Envelopment Analysis models, Annals of Operations Research, Vol. 73, 1997.
[8] Dulá, J.H., A computational study of DEA with massive data sets, Computers and Operations Research, in print.
[9] Dulá, J.H. and F.J. López, Algorithms for the frame of a finitely generated unbounded polyhedron, INFORMS Journal on Computing, Vol. 18, 2006.
[10] Dulá, J.H., R.V. Helgason, and B.L. Hickman, Preprocessing schemes and a solution method for the convex hull problem in multidimensional space, in Computer Science and Operations Research: New Developments in Their Interfaces, O. Balci (ed.), Pergamon Press, U.K.
[11] Shaheen, M., Frame of a Pointed Finite Polyhedral Cone, Master of Science thesis, Department of Economics, Mathematics, and Statistics, University of Windsor, 2000, Windsor, Ontario, Canada.
[12] López, F.J. and J.H. Dulá, Adding and removing an attribute in a DEA model: theory and processing, under review.

[13] Federal Financial Institutions Examination Council (FFIEC), 2004 Report of Condition and Income, research and data/weekly report of assets and liabilities.cfm.
[14] Rockafellar, R.T., Convex Analysis, Princeton University Press, 1970.

APPENDIX 1. Results and Proofs.

Result 1. Let $a^{\hat{\jmath}} = \arg\max \{ \|\hat{p} - a^j\|_\ell;\ j = 1,\dots,n \}$, where $\|\cdot\|_\ell$ is the $\ell$-norm of the argument. If $a^{\hat{\jmath}}$ is unique, then it is an extreme point of the convex hull of $\mathcal{A}$.

Proof. Set $\|\hat{p} - a^{\hat{\jmath}}\|_\ell = \hat{\beta}$ and define $B(\hat{p}, \hat{\beta}) = \{ z : \|\hat{p} - z\|_\ell \le \hat{\beta} \}$; that is, $B(\hat{p}, \hat{\beta})$ is the $\ell$-ball centered at $\hat{p}$ with radius $\hat{\beta}$. Two properties of $B(\hat{p}, \hat{\beta})$ are relevant: 1) it is convex (see [14]); and 2) the elements of $\mathcal{A}$ are in its strict interior except for $a^{\hat{\jmath}}$, which is on the boundary. Therefore, the convex hull of $\mathcal{A}$ is contained in $B(\hat{p}, \hat{\beta})$ and there exists a supporting hyperplane for the $\ell$-ball at $a^{\hat{\jmath}}$. This hyperplane also supports the convex hull, but only at $a^{\hat{\jmath}}$. This is enough to conclude that $a^{\hat{\jmath}}$ is an extreme point of the convex hull.

Result 2. An inefficient DMU for the VRS production possibility set cannot maximize the 2-norm when the focal point is $\hat{p}_i = \min\{ a^j_i;\ j = 1,\dots,n \}$ for every $i$.

Proof. Any inefficient DMU (including the weakly efficient), $\tilde{a}$, can be expressed as $\tilde{a} = \bar{a} + v$, where $\bar{a}$ is a convex combination of the extreme points of the VRS production possibility set and $v \ne 0$ is a direction in the recession cone, i.e., a nonnegative combination of the directions $-e^i$, $i = 1,\dots,m$ (so that $v \le 0$). Note that $\hat{p}_i \le \tilde{a}_i \le \bar{a}_i$ for $i = 1,\dots,m$, and $\tilde{a}_i < \bar{a}_i$ for some $i$. Therefore $\|\tilde{a} - \hat{p}\|_2 < \|\bar{a} - \hat{p}\|_2$.
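Results 1 and 2 together justify a norm-maximization preprocessor: with the focal point $\hat{p}$ defined by the coordinate-wise minima, a unique maximizer of the 2-norm distance from $\hat{p}$ is extreme (Result 1) and cannot be inefficient (Result 2). A minimal sketch under those assumptions (hypothetical names, not the implementation tested in the paper):

```python
import math

def norm_max_preprocessor(dmus, tol=1e-12):
    """Sketch of the norm-maximization preprocessor of Results 1 and 2.
    Focal point: p_hat_i = min_j a^j_i.  A unique 2-norm maximizer from
    p_hat is an extreme, efficient DMU; on a tie, nothing is concluded.
    Returns the winning index, or None on a tie."""
    m = len(dmus[0])
    p_hat = [min(a[i] for a in dmus) for i in range(m)]
    dist = [math.dist(a, p_hat) for a in dmus]
    d_max = max(dist)
    winners = [j for j, d in enumerate(dist) if d > d_max - tol]
    return winners[0] if len(winners) == 1 else None

# p_hat = (1, 1); (2, 4) is the unique farthest point, hence efficient.
points = [(1.0, 1.0), (2.0, 4.0), (3.0, 2.0)]
print(norm_max_preprocessor(points))  # 1
```

The cost is one pass to compute $\hat{p}$ and one distance per DMU, with no LP.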

APPENDIX 2. HyperTwist: A Preprocessor Based on Hyperplane Rotation.

Derivation. Without loss of generality (see Note 1 below), and to simplify notation, we develop this derivation in terms of the $m$-th dimension. Consider an extreme-efficient DMU $a^{\hat{\jmath}} \in \mathbb{R}^m$ obtained by maximizing the translation of a hyperplane with parameterized normal vector $\hat{\pi}(\gamma) = \begin{bmatrix} \pi \\ \gamma \end{bmatrix}$, where $0 < \pi \in \mathbb{R}^{m-1}$ and $0 \le \gamma \in \mathbb{R}$. A supporting hyperplane in $\mathbb{R}^m$, $H(\hat{\pi}(\bar{\gamma}), \hat{\beta})$, for the VRS production possibility set at $a^{\hat{\jmath}}$ is such that

$$\langle \hat{\pi}(0), a^{\hat{\jmath}} \rangle + \bar{\gamma}\, a^{\hat{\jmath}}_m = \hat{\beta} \qquad (1)$$
$$\langle \hat{\pi}(0), a^{j} \rangle + \bar{\gamma}\, a^{j}_m \le \hat{\beta}; \qquad j = 1,\dots,n. \qquad (2)$$

Any rotation of this hyperplane with respect to the $m$-th axis at the point $a^{\hat{\jmath}}$, such that the hyperplane remains a support, has the form

$$\langle \hat{\pi}(0), a^{\hat{\jmath}} \rangle + \gamma^*\, a^{\hat{\jmath}}_m = \beta^* \qquad (3)$$
$$\langle \hat{\pi}(0), a^{j} \rangle + \gamma^*\, a^{j}_m \le \beta^*; \qquad j = 1,\dots,n, \qquad (4)$$

where $\beta^*$ and $\gamma^*$ are controllable parameters (although not necessarily completely free). It follows from (3) and (4) that

$$\langle \hat{\pi}(0), a^{j} \rangle + \gamma^*\, a^{j}_m \le \langle \hat{\pi}(0), a^{\hat{\jmath}} \rangle + \gamma^*\, a^{\hat{\jmath}}_m; \qquad j = 1,\dots,n.$$

Solving for $\gamma^*$, when $(a^j_m - a^{\hat{\jmath}}_m) > 0$, we obtain

$$\gamma^* \le \frac{\langle \hat{\pi}(0), a^{\hat{\jmath}} \rangle - \langle \hat{\pi}(0), a^{j} \rangle}{a^j_m - a^{\hat{\jmath}}_m}. \qquad (5)$$

The maximum rotation occurs at a point $a^{j^*} \ne a^{\hat{\jmath}}$ where $\gamma^*$ equals the right-hand side in (5), so long as (5) holds for every point $a^j \in \mathcal{A}$ for which $(a^j_m - a^{\hat{\jmath}}_m) > 0$. If there does not exist such a point, the maximum rotation occurs when the hyperplane supports the polyhedral set at a face

orthogonal to the $m$-th axis defined by one or more directions of recession. The second parameter, $\beta^*$, is now uniquely specified by (3). The new hyperplane, $H(\hat{\pi}(\gamma^*), \beta^*)$, supports the production possibility set at both $a^{\hat{\jmath}}$ and $a^{j^*}$.

Notes.
1. Our assumptions about the data mean that the recession cone of the VRS production possibility set is always the negative orthant, independent of the input/output assignments of the attributes. For this reason, working in a dimension that corresponds to an input or an output does not make any difference for the purpose of our development and of procedure HyperTwist.
2. If the denominator $(a^j_m - a^{\hat{\jmath}}_m) > 0$ in (5), then the numerator is strictly positive, assuring $\gamma^* > 0$. The numerator cannot be negative because the hyperplane $H(\hat{\pi}(\bar{\gamma}), \hat{\beta})$ supports the production possibility set at $a^{\hat{\jmath}}$ by construction. It cannot be zero while the condition on the denominator holds, since this would mean that $a^j$ belongs to this hyperplane, which is impossible because the point with the largest $m$-th component on this hyperplane, $a^{\hat{\jmath}}$, is selected.
3. The point $a^{j^*}$ will be the one with the largest $m$-th component and will serve as the hinge for the next twist of the supporting hyperplane. In case of ties, any one of the tied points can be the hinge, possibly leading to different paths. In procedure HyperTwist we require the identification of only one extreme point to proceed.
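The ratio test in (5) is the computational core of each twist and involves nothing more than inner products and one division per candidate DMU. A minimal sketch of a single rotation step under the derivation's setup (hypothetical names; the outer loop over hinges and the recession-face case are only signaled):

```python
def twist_step(dmus, pi0, j_hat, m_idx):
    """One HyperTwist-style rotation about the hinge a^{j_hat}, axis m_idx.
    pi0 is a length-m vector holding the fixed normal components; its
    m_idx entry is ignored (it plays the role of pi_hat(0)).
    Returns (gamma_star, hinge): the largest rotation parameter allowed
    by (5) and the DMU attaining it, or (None, None) when no DMU has a
    larger m_idx component (the hyperplane then rotates onto a face
    defined by directions of recession)."""
    def inner(a):  # <pi_hat(0), a>: skip the m_idx component
        return sum(pi0[i] * a[i] for i in range(len(a)) if i != m_idx)
    a_hat = dmus[j_hat]
    gamma_star, hinge = None, None
    for j, a in enumerate(dmus):
        denom = a[m_idx] - a_hat[m_idx]
        if denom > 0:  # only DMUs above the hinge in dimension m_idx bind
            ratio = (inner(a_hat) - inner(a)) / denom  # right-hand side of (5)
            if gamma_star is None or ratio < gamma_star:
                gamma_star, hinge = ratio, j
    return gamma_star, hinge

# Hinge (3, 0); rotating about it, (2, 2) binds first: gamma* = 0.5, and
# the hyperplane x + 0.5 y = 3 supports the cloud at both points.
points = [(3.0, 0.0), (2.0, 2.0), (0.0, 3.0)]
print(twist_step(points, [1.0, 0.0], 0, 1))  # (0.5, 1)
```

Each step is a minimum-ratio test, so its cost per candidate DMU matches the "inner products and ratios" accounting in the concluding remarks.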

APPENDIX 3. Yield of Preprocessors.

[Table of yields of HyperTran, HyperTwist, and Dominator by dimension, cardinality, and density (1%, 13%, and 25%); the numeric entries are not recoverable from the transcription.]



More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

Linear programming and duality theory

Linear programming and duality theory Linear programming and duality theory Complements of Operations Research Giovanni Righini Linear Programming (LP) A linear program is defined by linear constraints, a linear objective function. Its variables

More information

Convex Sets (cont.) Convex Functions

Convex Sets (cont.) Convex Functions Convex Sets (cont.) Convex Functions Optimization - 10725 Carlos Guestrin Carnegie Mellon University February 27 th, 2008 1 Definitions of convex sets Convex v. Non-convex sets Line segment definition:

More information

A robust optimization based approach to the general solution of mp-milp problems

A robust optimization based approach to the general solution of mp-milp problems 21 st European Symposium on Computer Aided Process Engineering ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) 2011 Elsevier B.V. All rights reserved. A robust optimization based

More information

Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube

Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube Kavish Gandhi April 4, 2015 Abstract A geodesic in the hypercube is the shortest possible path between two vertices. Leader and Long

More information

Convex Optimization. Convex Sets. ENSAE: Optimisation 1/24

Convex Optimization. Convex Sets. ENSAE: Optimisation 1/24 Convex Optimization Convex Sets ENSAE: Optimisation 1/24 Today affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes

More information

AM 221: Advanced Optimization Spring 2016

AM 221: Advanced Optimization Spring 2016 AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 2 Wednesday, January 27th 1 Overview In our previous lecture we discussed several applications of optimization, introduced basic terminology,

More information

CHAPTER 4 VORONOI DIAGRAM BASED CLUSTERING ALGORITHMS

CHAPTER 4 VORONOI DIAGRAM BASED CLUSTERING ALGORITHMS CHAPTER 4 VORONOI DIAGRAM BASED CLUSTERING ALGORITHMS 4.1 Introduction Although MST-based clustering methods are effective for complex data, they require quadratic computational time which is high for

More information

Advanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret

Advanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret Advanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret Greedy Algorithms (continued) The best known application where the greedy algorithm is optimal is surely

More information

Evaluation of Efficiency in DEA Models Using a Common Set of Weights

Evaluation of Efficiency in DEA Models Using a Common Set of Weights Evaluation of in DEA Models Using a Common Set of Weights Shinoy George 1, Sushama C M 2 Assistant Professor, Dept. of Mathematics, Federal Institute of Science and Technology, Angamaly, Kerala, India

More information

CHAPTER 3 A FAST K-MODES CLUSTERING ALGORITHM TO WAREHOUSE VERY LARGE HETEROGENEOUS MEDICAL DATABASES

CHAPTER 3 A FAST K-MODES CLUSTERING ALGORITHM TO WAREHOUSE VERY LARGE HETEROGENEOUS MEDICAL DATABASES 70 CHAPTER 3 A FAST K-MODES CLUSTERING ALGORITHM TO WAREHOUSE VERY LARGE HETEROGENEOUS MEDICAL DATABASES 3.1 INTRODUCTION In medical science, effective tools are essential to categorize and systematically

More information

8.B. The result of Regiomontanus on tetrahedra

8.B. The result of Regiomontanus on tetrahedra 8.B. The result of Regiomontanus on tetrahedra We have already mentioned that Plato s theory that the five regular polyhedra represent the fundamental elements of nature, and in supplement (3.D) to the

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

Modified Model for Finding Unique Optimal Solution in Data Envelopment Analysis

Modified Model for Finding Unique Optimal Solution in Data Envelopment Analysis International Mathematical Forum, 3, 2008, no. 29, 1445-1450 Modified Model for Finding Unique Optimal Solution in Data Envelopment Analysis N. Shoja a, F. Hosseinzadeh Lotfi b1, G. R. Jahanshahloo c,

More information

Scan Scheduling Specification and Analysis

Scan Scheduling Specification and Analysis Scan Scheduling Specification and Analysis Bruno Dutertre System Design Laboratory SRI International Menlo Park, CA 94025 May 24, 2000 This work was partially funded by DARPA/AFRL under BAE System subcontract

More information

On Unbounded Tolerable Solution Sets

On Unbounded Tolerable Solution Sets Reliable Computing (2005) 11: 425 432 DOI: 10.1007/s11155-005-0049-9 c Springer 2005 On Unbounded Tolerable Solution Sets IRENE A. SHARAYA Institute of Computational Technologies, 6, Acad. Lavrentiev av.,

More information

General properties of staircase and convex dual feasible functions

General properties of staircase and convex dual feasible functions General properties of staircase and convex dual feasible functions JÜRGEN RIETZ, CLÁUDIO ALVES, J. M. VALÉRIO de CARVALHO Centro de Investigação Algoritmi da Universidade do Minho, Escola de Engenharia

More information

Convexity: an introduction

Convexity: an introduction Convexity: an introduction Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 74 1. Introduction 1. Introduction what is convexity where does it arise main concepts and

More information

Variable Selection 6.783, Biomedical Decision Support

Variable Selection 6.783, Biomedical Decision Support 6.783, Biomedical Decision Support (lrosasco@mit.edu) Department of Brain and Cognitive Science- MIT November 2, 2009 About this class Why selecting variables Approaches to variable selection Sparsity-based

More information

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov An Extension of the Multicut L-Shaped Method INEN 698 - Large-Scale Stochastic Optimization Semester project Svyatoslav Trukhanov December 13, 2005 1 Contents 1 Introduction and Literature Review 3 2 Formal

More information

Efficient Optimal Linear Boosting of A Pair of Classifiers

Efficient Optimal Linear Boosting of A Pair of Classifiers Efficient Optimal Linear Boosting of A Pair of Classifiers Victor Boyarshinov Dept Computer Science Rensselaer Poly. Institute boyarv@cs.rpi.edu Malik Magdon-Ismail Dept Computer Science Rensselaer Poly.

More information

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 7 Dr. Ted Ralphs ISE 418 Lecture 7 1 Reading for This Lecture Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Wolsey Chapter 7 CCZ Chapter 1 Constraint

More information

Planar Graphs with Many Perfect Matchings and Forests

Planar Graphs with Many Perfect Matchings and Forests Planar Graphs with Many Perfect Matchings and Forests Michael Biro Abstract We determine the number of perfect matchings and forests in a family T r,3 of triangulated prism graphs. These results show that

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 50

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 50 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 50 CS 473: Algorithms, Spring 2018 Introduction to Linear Programming Lecture 18 March

More information

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

Geometry. Every Simplicial Polytope with at Most d + 4 Vertices Is a Quotient of a Neighborly Polytope. U. H. Kortenkamp. 1.

Geometry. Every Simplicial Polytope with at Most d + 4 Vertices Is a Quotient of a Neighborly Polytope. U. H. Kortenkamp. 1. Discrete Comput Geom 18:455 462 (1997) Discrete & Computational Geometry 1997 Springer-Verlag New York Inc. Every Simplicial Polytope with at Most d + 4 Vertices Is a Quotient of a Neighborly Polytope

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

2. Convex sets. affine and convex sets. some important examples. operations that preserve convexity. generalized inequalities

2. Convex sets. affine and convex sets. some important examples. operations that preserve convexity. generalized inequalities 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

Cluster Analysis. Prof. Thomas B. Fomby Department of Economics Southern Methodist University Dallas, TX April 2008 April 2010

Cluster Analysis. Prof. Thomas B. Fomby Department of Economics Southern Methodist University Dallas, TX April 2008 April 2010 Cluster Analysis Prof. Thomas B. Fomby Department of Economics Southern Methodist University Dallas, TX 7575 April 008 April 010 Cluster Analysis, sometimes called data segmentation or customer segmentation,

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Maximum Margin Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB CSE 474/574

More information

6. Relational Algebra (Part II)

6. Relational Algebra (Part II) 6. Relational Algebra (Part II) 6.1. Introduction In the previous chapter, we introduced relational algebra as a fundamental model of relational database manipulation. In particular, we defined and discussed

More information

Linear programming II João Carlos Lourenço

Linear programming II João Carlos Lourenço Decision Support Models Linear programming II João Carlos Lourenço joao.lourenco@ist.utl.pt Academic year 2012/2013 Readings: Hillier, F.S., Lieberman, G.J., 2010. Introduction to Operations Research,

More information

NUMERICAL METHODS PERFORMANCE OPTIMIZATION IN ELECTROLYTES PROPERTIES MODELING

NUMERICAL METHODS PERFORMANCE OPTIMIZATION IN ELECTROLYTES PROPERTIES MODELING NUMERICAL METHODS PERFORMANCE OPTIMIZATION IN ELECTROLYTES PROPERTIES MODELING Dmitry Potapov National Research Nuclear University MEPHI, Russia, Moscow, Kashirskoe Highway, The European Laboratory for

More information

Pebble Sets in Convex Polygons

Pebble Sets in Convex Polygons 2 1 Pebble Sets in Convex Polygons Kevin Iga, Randall Maddox June 15, 2005 Abstract Lukács and András posed the problem of showing the existence of a set of n 2 points in the interior of a convex n-gon

More information

An Experiment in Visual Clustering Using Star Glyph Displays

An Experiment in Visual Clustering Using Star Glyph Displays An Experiment in Visual Clustering Using Star Glyph Displays by Hanna Kazhamiaka A Research Paper presented to the University of Waterloo in partial fulfillment of the requirements for the degree of Master

More information

Introduction to Modern Control Systems

Introduction to Modern Control Systems Introduction to Modern Control Systems Convex Optimization, Duality and Linear Matrix Inequalities Kostas Margellos University of Oxford AIMS CDT 2016-17 Introduction to Modern Control Systems November

More information

Integral Geometry and the Polynomial Hirsch Conjecture

Integral Geometry and the Polynomial Hirsch Conjecture Integral Geometry and the Polynomial Hirsch Conjecture Jonathan Kelner, MIT Partially based on joint work with Daniel Spielman Introduction n A lot of recent work on Polynomial Hirsch Conjecture has focused

More information

A Scalable Approach for Packet Classification Using Rule-Base Partition

A Scalable Approach for Packet Classification Using Rule-Base Partition CNIR Journal, Volume (5), Issue (1), Dec., 2005 A Scalable Approach for Packet Classification Using Rule-Base Partition Mr. S J Wagh 1 and Dr. T. R. Sontakke 2 [1] Assistant Professor in Information Technology,

More information

No-Arbitrage ROM Simulation

No-Arbitrage ROM Simulation Alois Geyer 1 Michael Hanke 2 Alex Weissensteiner 3 1 WU (Vienna University of Economics and Business) and Vienna Graduate School of Finance (VGSF) 2 Institute for Financial Services, University of Liechtenstein

More information

Overview of Clustering

Overview of Clustering based on Loïc Cerfs slides (UFMG) April 2017 UCBL LIRIS DM2L Example of applicative problem Student profiles Given the marks received by students for different courses, how to group the students so that

More information

Guidelines for the application of Data Envelopment Analysis to assess evolving software

Guidelines for the application of Data Envelopment Analysis to assess evolving software Short Paper Guidelines for the application of Data Envelopment Analysis to assess evolving software Alexander Chatzigeorgiou Department of Applied Informatics, University of Macedonia 546 Thessaloniki,

More information