USING PRINCIPAL COMPONENTS ANALYSIS FOR AGGREGATING JUDGMENTS IN THE ANALYTIC HIERARCHY PROCESS

To be submitted to the International Symposium on the Analytic Hierarchy Process (ISAHP) 2014, Washington, D.C., U.S.A.

Natalie M. Scala
Department of e-Business and Technology Management
Towson University
Towson, MD, USA
E-mail: nscala@towson.edu

Jayant Rajgopal
Department of Industrial Engineering
University of Pittsburgh
Pittsburgh, PA, USA
E-mail: rajgopal@pitt.edu

Luis Vargas
Katz Graduate School of Business
University of Pittsburgh
Pittsburgh, PA, USA
E-mail: vargas@katz.pitt.edu

Kim LaScola Needy
Department of Industrial Engineering
University of Arkansas
Fayetteville, AR, USA
E-mail: kneedy@uark.edu

ABSTRACT

It is often appropriate to have more than one decision maker perform the pairwise comparisons that are part of the Analytic Hierarchy Process (AHP). With group judgments one would hope for broad consensus among the decision makers, in which case one would aggregate the judgments via their geometric mean. However, consensus may not always be reached, and significant dispersion may exist among the judgments. The question then arises as to what would be an appropriate aggregation scheme in such situations. Too much dispersion violates the principle of Pareto Optimality at the comparison and/or matrix levels, so that the group may be homogeneous in some comparisons and heterogeneous in others. We propose a new aggregation method for situations in which the raw geometric mean cannot be used and the decision makers' judgments cannot be revised. Our method makes use of principal components analysis (PCA) to combine the judgments into one aggregated value for each pairwise comparison. We show that this approach is equivalent to using a weighted geometric mean with the weights obtained from the PCA.

Keywords: group decisions, geometric mean, geometric dispersion

1. Introduction

This paper addresses the aggregation of group judgments in the Analytic Hierarchy Process (AHP). It is often appropriate to have more than one decision maker perform the pairwise comparisons in an AHP analysis. This allows multiple points of view and the knowledge of multiple subject matter experts to play a role in the final decision. These decision makers can work separately, be spread over multiple geographic locations, or be in a centralized setting. Traditionally, the geometric mean (GM) has been used for aggregating the judgments of the group members. However, caution must be exercised, because using the GM when excess dispersion exists around it can result in a violation of the principles on which the AHP is based. The research literature does not describe any accepted method for aggregation when the dispersion is large and the decision makers in the group are unwilling or unable to revise their pairwise comparisons. We address this situation by developing a new aggregation technique that makes use of principal components analysis (PCA) to combine judgments into one aggregated value for each pairwise comparison. We show that this is equivalent to using a weighted geometric mean with the weights obtained from the PCA.

2. Literature Review

The best case in group decision making is broad consensus among the experts. In such cases the geometric mean of the pairwise comparisons is used; Aczél and Saaty (1983) showed that the GM is the only mathematically valid method for aggregating pairwise comparisons. Where the judges are not equally important or reliable, a weighted geometric mean (WGM) is appropriate (Aczél and Alsina, 1987). In practice, consensus among the judges might not always be reached. The concept of examining dispersion in group judgments is relatively new; Saaty and Vargas (2007) first introduced the notion of geometric dispersion and showed that the GM cannot be used for aggregation under such conditions. Too much dispersion around the GM of the judgments violates the principle of Pareto Optimality at the comparison and/or matrix level. If this happens, the group may be homogeneous in some comparisons and heterogeneous in others. The same authors developed a formal dispersion test for group judgment aggregation to check whether the variability around the geometric mean is small enough that all comparisons remain homogeneous (Saaty and Vargas, 2007). The test is designed to determine whether the observed variance in the set of group judgments for a given pairwise comparison is typical, given the group's behavior.
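For concreteness, the two aggregation rules referenced above can be stated compactly as follows (using the notation introduced in Section 4 below): with $m$ judges, let $a_{ij}^{(k)}$ denote the judgment of decision maker $k$ when comparing factor $i$ with factor $j$, and let $w_k \in (0,1)$, $\sum_{k=1}^{m} w_k = 1$, be the judges' weights. Then

$$\text{GM:}\quad a_{ij} = \Bigl(\prod_{k=1}^{m} a_{ij}^{(k)}\Bigr)^{1/m}, \qquad \text{WGM:}\quad a_{ij} = \prod_{k=1}^{m} \bigl(a_{ij}^{(k)}\bigr)^{w_k}.$$

The GM is the special case of the WGM with $w_k = 1/m$ for every judge, and both preserve the reciprocal property, since $a_{ji}^{(k)} = 1/a_{ij}^{(k)}$ implies $a_{ji} = 1/a_{ij}$.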

When consensus is lacking and dispersion across judgments is too large, the literature (Aczél and Saaty, 1983; Aczél and Alsina, 1987; Saaty and Vargas, 2007) directs the decision makers to reconsider their judgments in order to reach consensus. If the decision makers choose to revise their judgments and the new comparisons are reasonably consistent in the AHP, then the dispersion test should be performed again. If the revised judgments pass that test, then the GM can be used to aggregate them. In practice, though, returning to the decision makers might not be feasible for logistical or geographic reasons. Moreover, the decision makers may not want to revise their judgments, or the revised judgments might still not pass the dispersion test. In this case, the AHP literature has not, to our knowledge, addressed the question of how one should proceed with group judgment aggregation.

This research describes a new method for aggregating judgments when the dispersion is large and the decision makers are unwilling or unable to revise their judgments. Our method is equivalent to using a weighted geometric mean with the weights computed via principal components analysis. The weighted geometric mean preserves the unanimity, homogeneity, and reciprocal properties of the AHP. It also serves to give different weights to the different judges in the group, because they are presumably not all consistent to the same degree. This research also addresses a general need in the literature for guidance on how the decision makers' weights for the WGM should be selected, regardless of whether the judgments pass the dispersion test.

3. Hypotheses/Objectives

This paper develops a numerical simulation with the dual objectives of understanding the limiting behavior of our PCA-based aggregation approach as the diversity of opinion among the decision makers decreases, and of demonstrating that it converges uniformly to aggregation by the GM. Doing so demonstrates that the PCA-based approach, which solves for a weight for every decision maker, takes the form of a weighted geometric mean. Because the literature has shown that a weighted geometric mean is appropriate for judgment aggregation, the approach is therefore appropriate for group aggregation: it finds weights for the decision makers that account for the variability around the geometric mean.

4. Research Design/Methodology

PCA is a statistical technique that uses an orthogonal linear transformation to transform a set of (most likely correlated) variables into a smaller set of uncorrelated variables (Dunteman, 1989). In essence, PCA attempts to reduce the dimensionality of a data set by distilling it down to the components that capture the majority of the variability associated with the original data. If we start with a set of n observations on m variables, PCA reduces the original data set to n observations on k components that capture a large proportion of the total variance, with the first principal component capturing the maximum variance.

Considering group judgments in the context of the AHP, each of the n pairwise comparisons made by the decision makers in the group may be viewed as analogous to an observation in PCA, and each of the m decision makers may be viewed as a variable, i.e., as one dimension of the data set. Suppose decision maker $k$ is assigned a weight $w_k$, where $w_k \in (0,1)$ and $\sum_{k=1}^{m} w_k = 1$. Further, let $a_{ij}^{(k)}$ represent the value from Saaty's Fundamental Scale of Absolute Numbers (Saaty, 1980) chosen by decision maker $k$ in comparing factor $i$ with factor $j$, and let $a_{ij}$ denote the weighted GM of these values across the $k = 1, \ldots, m$ judges obtained using weight $w_k$ for judge $k$. In our approach we first replace all of the original comparison values with their logarithms; it is readily apparent that the weighted arithmetic mean of the $\log(a_{ij}^{(k)})$ values is the logarithm of the weighted GM of the $a_{ij}^{(k)}$, and that it too maintains the reciprocal property. We compute the principal components of the matrix of log-comparisons and restrict ourselves to the first principal component, which is the eigenvector corresponding to the largest eigenvalue of the corresponding covariance matrix. By definition, this m-vector captures the majority of the variance among the different data dimensions; we normalize it and use it as the vector of weights to develop the final aggregated values for the numerical comparisons, which in turn are used to develop the final set of priorities in the AHP. Note that this involves no distributional assumptions, so the approach may be used with any dispersed data set.
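The following is a minimal sketch, in Python with NumPy, of the procedure just described; it is our illustration of the general technique, and the function name and structure are not taken from the paper.

import numpy as np

def pca_weighted_aggregation(A):
    """Aggregate an n x m matrix of pairwise-comparison judgments
    (n comparisons in rows, m decision makers in columns) into one
    value per comparison, using the PCA-based weighting scheme:
    work with logarithms, take the first principal component of the
    judges, square its entries so they sum to one, and use them as
    weights in a weighted geometric mean."""
    X = np.log(np.asarray(A, dtype=float))      # n x m matrix of log-judgments

    # Covariance matrix across the m decision makers (judges as variables).
    C = np.cov(X, rowvar=False)

    # First principal component = unit eigenvector of the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(C)
    pc1 = eigvecs[:, np.argmax(eigvals)]

    # The eigenvector has unit l2-norm; squaring its entries gives
    # non-negative weights that sum to one (an l1-normalized vector).
    w = pc1 ** 2

    # Weighted arithmetic mean of the logs = log of the weighted geometric mean;
    # exponentiate to return to the original judgment scale.
    z = X @ w
    return np.exp(z), w

Applied to an n x m matrix of judgments on Saaty's scale (rows as comparisons, columns as judges), the function returns the aggregated comparison values and the per-judge weights, i.e., the kind of calculation illustrated in Section 5.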

The full paper examines this approach in detail by studying how the weights behave with respect to the magnitude of the disagreement among the judges. It is shown that as the dispersion tends to zero, the weights from the PCA tend to equal values for all the judges, so that the weighted GM converges to the (raw) GM. This is demonstrated through an empirical study using numerical simulation. The results show that the final priority vector found by aggregating dispersed judgments using the PCA-based method is a generalization of the priority vector found when the judgments are aggregated using a traditional GM with non-dispersed decisions, with appropriate aggregation of judgments that do not pass the dispersion test. Furthermore, because the PCA approach generalizes the traditional GM, it can also be used as an appropriate method to find weights for the decision makers whenever a weighted geometric mean is desired.

5. Data/Model Analysis

As an illustrative example, consider a situation where judgments are made by m = 4 different decision makers on f = 3 different factors (A, B, and C). Note that there will be n = 3 pairwise comparisons, and each comparison is assumed to take on a numerical value from the Fundamental Scale of Absolute Numbers (1/9, 1/7, 1/5, 1/3, 1, 3, 5, 7, 9). Suppose the resulting set of judgments is as shown in the comparison matrix A = [ ]. In this matrix, each column represents the judgments of one of the four decision makers; the first row is the comparison A vs. B, the second row is A vs. C, and the third row is B vs. C. Note that there is a fair amount of dispersion among the judgments. When we conduct the dispersion test, the p-values for the three sets of comparisons, by row, are 0.141, 0.023, and 0.160, respectively. At the usual significance level of 5% this would indicate that the GM cannot be used in two of the three comparisons due to excessive dispersion among the decision makers.

With the PCA approach, we start by defining the transformed matrix X, each entry of which is the logarithm of the corresponding entry in A. It may be verified that the first principal component (the unit eigenvector corresponding to the largest eigenvalue of the covariance matrix of X) is the vector [0.331155, 0.626118, 0.465358, 0.530806]^T. This vector has an l2-norm equal to 1; because the AHP uses the l1 norm, we simply square each element (so that the elements sum to 1.0) to obtain the vector of final weights for the judges, w = [0.1097, 0.3920, 0.2166, 0.2818]^T. The aggregated (log-scale) value for each comparison is then given by z = Xw = [1.5277, -1.3918, -0.7514]^T. These weights ensure that the aggregated judgments are homogeneous and pass the dispersion test, while incorporating the opinions of every decision maker and not just a select few.
In order to arrive at the original scale, we now reverse the transformation by exponentiating each element of z to obtain [4.6078, 0.2486, 0.4717], which can then be used to form the reciprocal pairwise comparison matrix for the AHP priorities. The final priority vector is v_P = [0.2942, 0.1315, 0.5743]^T.

6. Limitations

This research considers group aggregation in the AHP when the pairwise comparisons do not pass the dispersion test. However, it remains an open question how many pairwise comparisons must fail the dispersion test before our proposed approach becomes necessary. While the results of the simulation developed in the full paper provide a guideline, we do not formally prove these cutoffs, as that work is still under development.

7. Conclusions

This research addresses the synthesis of AHP judgments from multiple decision makers when the judgments do not pass the dispersion test and the decision makers are unwilling or unable to revise their judgments. In these cases, significant diversity of opinion exists among the decision makers. Prior work on this topic has focused mainly on identifying such situations but provides no guidance on how to deal with them, other than asking the decision makers to reconsider their judgments. Simply discarding a judgment that is not consistent with the majority is also not a satisfactory solution, because it defeats the purpose of obtaining a diverse set of perspectives. While the judgments of all the decision makers should be considered, there is no reason in general why all decision makers must be given the same degree of importance when synthesizing their judgments. This leads to the natural choice of a weighted GM, which preserves the properties that are critical to the AHP and provides for objective determination of the weights. We propose an aggregation approach based on PCA that treats the judges as variables in a set of comparisons and uses the first principal component to arrive at a weight for each decision maker. A detailed simulation provides guidance on when this approach might be preferred over a simple GM and shows that the final priority vector from this approach converges uniformly to the one obtained from a simple GM as the diversity of opinion among the judges goes to zero.

8. Key References

Aczél, J., & Saaty, T. L. (1983). Procedures for synthesizing ratio judgements. Journal of Mathematical Psychology, 27, 93-102.

Aczél, J., & Alsina, C. (1987). Synthesizing judgements: A functional equations approach. Mathematical Modelling, 9(3-5), 311-320.

Dunteman, G. H. (1989). Principal components analysis. Newbury Park, California: Sage Publications.

Saaty, T. L. (1980). The Analytic Hierarchy Process: Planning, priority setting, resource allocation. New York: McGraw-Hill.

Saaty, T. L., & Vargas, L. G. (2007). Dispersion of group judgments. Mathematical and Computer Modelling, 46, 918-925.
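The arithmetic of this worked example can be checked with a short NumPy script (ours, not part of the paper; the original judgment matrix A is not reproduced above, so the script starts from the quoted eigenvector and the quoted vector z, and variable names are our own):

import numpy as np

# First principal component of the log-judgment matrix, as reported above.
pc1 = np.array([0.331155, 0.626118, 0.465358, 0.530806])
w = pc1 ** 2                          # square the unit (l2) eigenvector -> l1 weights
print(np.round(w, 4), round(w.sum(), 4))   # [0.1097 0.392  0.2166 0.2818] 1.0

# Aggregated log-scale comparisons z = Xw as reported above, and their exponentials.
z = np.array([1.5277, -1.3918, -0.7514])
a_AB, a_AC, a_BC = np.exp(z)          # approx. 4.6076, 0.2486, 0.4717
                                      # (the paper's 4.6078 comes from the unrounded z)

# Reciprocal pairwise comparison matrix for factors A, B, C and its priority vector.
M = np.array([[1.0,        a_AB,       a_AC],
              [1.0 / a_AB, 1.0,        a_BC],
              [1.0 / a_AC, 1.0 / a_BC, 1.0]])
eigvals, eigvecs = np.linalg.eig(M)
v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
v = v / v.sum()                       # normalize the principal eigenvector to sum to 1
print(np.round(v, 4))                 # approx. [0.2942 0.1315 0.5743]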