A Simulation Based Comparative Study of Normalization Procedures in Multiattribute Decision Making


Proceedings of the 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases, Corfu Island, Greece, February 16-19, 2007

A Simulation Based Comparative Study of Normalization Procedures in Multiattribute Decision Making

SUBRATA CHAKRABORTY, Student Member, IEEE, Clayton School of Information Technology, Monash University, Clayton, Victoria 3800, AUSTRALIA.
CHUNG-HSING YEH, Senior Member, IEEE, Clayton School of Information Technology, Monash University, Clayton, Victoria 3800, AUSTRALIA.

Abstract: Normalization procedures are required in multiattribute decision making (MADM) to transform performance ratings with different data measurement units in a decision matrix into a compatible unit. MADM methods generally use one particular normalization procedure without considering the suitability of other available procedures. This study compares four commonly known normalization procedures in terms of their ranking consistency and overall preference value consistency when used with the most widely used simple additive weight (SAW) method. To achieve this, new performance measure indices are introduced and new simulation settings are devised for dealing with various measurement settings. A wide range of MADM problems with various measurement scales are generated by simulation for the comparison study. The experiment result shows that vector normalization and linear scale transformation (the max method) outperform the other normalization procedures when used with SAW.

Key-Words: MADM, SAW, Normalization, Decision making, Decision support systems, Simulation study, Method comparison, Decision consistency.

1 Introduction

Multiattribute decision making (MADM) problems involve ranking or evaluating a finite number of alternatives with multiple, often conflicting, attributes. Various MADM methods have been developed to solve different problem settings, and individual methods have shown their suitability to particular decision problems.
The sheer complexity of MADM problems and the need for a timely solution often prompt decision makers to choose an MADM method for a problem on the basis of experience, knowledge and intuition. With quite a few available methods, it is extremely difficult to choose the best method that meets all the requirements. Several comparative studies on MADM methods have shown that certain methods are more suitable for specific decision settings than others [2][7][12][13][15].

In MADM problems, each alternative has a performance rating for each attribute, which represents the characteristics of the alternative. It is common that performance ratings for different attributes are measured in different units. To transform performance ratings into a compatible measurement unit, normalization procedures are used. MADM methods often use one normalization procedure to achieve compatibility between different measurement units. For example, SAW uses linear scale transformation (the max method) [1][3][5][6][13][14], TOPSIS uses vector normalization [13][14][16], ELECTRE uses vector normalization [4][14], and AHP uses linear scale transformation (the sum method) [8][9][10][14]. Enormous effort has been devoted to comparative studies of MADM methods, but no significant study has been conducted on the suitability of the normalization procedures used in those methods. This leaves the effectiveness of various MADM methods in doubt and certainly raises the need to examine the effects of various normalization procedures on the decision outcome when used with given MADM methods.

It is thus the main purpose of this paper to find out the effect of four commonly used normalization procedures on the decision outcomes of a given MADM method for the general MADM problem where attribute measurement units differ.

2 MADM & Normalization Procedures

An MADM problem usually involves a set of m alternatives A_i (i = 1, 2, ..., m), which are to be evaluated based on a set of n attributes (evaluation criteria) C_j (j = 1, 2, ..., n). Assessments are to be made to determine (a) the weighting vector W = (w_1, w_2, ..., w_j, ..., w_n) and (b) the decision matrix X = {x_ij, i = 1, 2, ..., m; j = 1, 2, ..., n}. The weighting vector W represents the relative importance of the n attributes C_j (j = 1, 2, ..., n) for the problem. The decision matrix X represents the performance ratings x_ij of alternatives A_i (i = 1, 2, ..., m) with respect to attributes C_j (j = 1, 2, ..., n). Given the weighting vector W and the decision matrix X, the objective is to rank or select the alternatives by giving each of them an overall preference value with respect to all attributes.

MADM methods generally require two processes to obtain the overall preference value for each alternative: (a) normalization and (b) aggregation. Normalization is first used to transform performance ratings to a compatible unit scale. An aggregation procedure is then used to combine the normalized decision matrix and the attribute weights W into an overall preference value for each alternative, on which the overall ranking of alternatives is based. To help present the comparative study, the four well known normalization procedures used in MADM are briefly described below: (a) vector normalization, (b) linear scale transformation (max-min method), (c) linear scale transformation (max method) and (d) linear scale transformation (sum method).
2.1 Vector Normalization (N1)

In this method, each performance rating of the decision matrix is divided by the norm of its attribute column. The normalized value r_ij is obtained by

r_ij = x_ij / sqrt(sum_{i=1}^{m} x_ij^2)   (1)

This method has the advantage of converting all attributes into a dimensionless measurement unit, thus making inter-attribute comparison easier. But it has the drawback of non-equal scale lengths, leading to difficulties in straightforward comparison [6][14].

2.2 Linear Scale Transformation, Max-Min Method (N2)

This method considers both the maximum and minimum performance ratings of each attribute during calculation. For benefit attributes, the normalized value r_ij is obtained by

r_ij = (x_ij - x_j^min) / (x_j^max - x_j^min)   (2)

For cost attributes, r_ij is computed as

r_ij = (x_j^max - x_ij) / (x_j^max - x_j^min)   (3)

where x_j^max is the maximum and x_j^min the minimum performance rating among the alternatives for attribute C_j (j = 1, 2, ..., n). This method has the advantage that the measurement scale is precisely between 0 and 1 for each attribute. The drawback is that the scale transformation is not proportional to the outcome [6].

2.3 Linear Scale Transformation, Max Method (N3)

This method divides the performance ratings of each attribute by the maximum performance rating for that attribute. For benefit attributes, the normalized value r_ij is obtained by

r_ij = x_ij / x_j^max   (4)

For cost attributes, r_ij is computed as

r_ij = 1 - x_ij / x_j^max   (5)

where x_j^max is the maximum performance rating among the alternatives for attribute C_j (j = 1, 2, ..., n). The advantage of this method is that outcomes are transformed in a linear way [6][14].
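For concreteness, the three procedures above can be sketched as small Python functions, each operating on one attribute column of the decision matrix. This is a sketch of my own (the function names are invented), not code from the paper:

```python
import math

def vector_norm(column):
    """N1: divide each rating by the Euclidean norm of the attribute column."""
    norm = math.sqrt(sum(x * x for x in column))
    return [x / norm for x in column]

def max_min_norm(column, benefit=True):
    """N2: rescale ratings to [0, 1] using the column's max and min."""
    hi, lo = max(column), min(column)
    if benefit:
        return [(x - lo) / (hi - lo) for x in column]
    return [(hi - x) / (hi - lo) for x in column]   # cost attribute

def max_norm(column, benefit=True):
    """N3: divide each rating by the column's maximum."""
    hi = max(column)
    return [x / hi if benefit else 1 - x / hi for x in column]

# One benefit-attribute column rated for three alternatives:
col = [4.0, 7.0, 10.0]
print(max_min_norm(col))   # [0.0, 0.5, 1.0]
print(max_norm(col))       # [0.4, 0.7, 1.0]
```

The example shows the non-equal scale lengths mentioned above: N2 always stretches the column to span [0, 1], while N3 preserves the ratios between ratings.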

2.4 Linear Scale Transformation, Sum Method (N4)

This method divides the performance ratings of each attribute by the sum of the performance ratings for that attribute as follows:

r_ij = x_ij / sum_{i=1}^{m} x_ij   (6)

where x_ij is the performance rating of alternative A_i for attribute C_j (j = 1, 2, ..., n) [14].

In order to obtain the overall preference value, the normalized decision matrix generated by a normalization procedure needs to be aggregated by an MADM method. In this study, the simple additive weight (SAW) method is used.

2.5 Simple Additive Weight (SAW)

The SAW method, also known as the weighted sum method, is probably the best known and most widely used MADM method [6]. The basic logic of the SAW method is to obtain a weighted sum of the performance ratings of each alternative over all attributes. The overall preference value of each alternative is obtained by

V_i = sum_{j=1}^{n} w_j r_ij ;  i = 1, 2, ..., m.   (7)

where V_i is the overall preference value of alternative A_i, w_j is the weight of attribute C_j, and r_ij are the normalized performance ratings [6][14].

3 Experiment and Performance Validation

3.1 Simulation Study

Simulation based experiments were conducted to provide results applicable to the general MADM problem rather than a particular MADM problem. In the simulation study, random decision matrices with alternatives A_i (i = 1, 2, ..., 4) and attributes C_j (j = 1, 2, ..., 4) were generated. Each decision matrix was normalized using each of the four normalization procedures, and SAW was then used to generate an overall preference value for each alternative. To simplify the process without losing generality, all the attributes were assigned equal weights. To gain an unbiased result, the following settings were used in the experiments:

1) 10,000 non-dominant decision matrices were generated randomly for each simulation run.
2) For each data range, the process was repeated 10 times and the average was recorded in the final result table.

3) The base data ranges used for the four attributes (C1, C2, C3, C4) were 1-10, 1-100, 1-1000 and 1-10000 respectively, with the lower bounds increased over 10 steps as given below:

Data range 1: [C1 (1-10), C2 (1-100), C3 (1-1000), C4 (1-10000)]
Data range 2: [C1 (1-10), C2 (10-100), C3 (100-1000), C4 (1000-10000)]
Data range 3: [C1 (2-10), C2 (20-100), C3 (200-1000), C4 (2000-10000)]
Data range 4: [C1 (3-10), C2 (30-100), C3 (300-1000), C4 (3000-10000)]
Data range 5: [C1 (4-10), C2 (40-100), C3 (400-1000), C4 (4000-10000)]
Data range 6: [C1 (5-10), C2 (50-100), C3 (500-1000), C4 (5000-10000)]
Data range 7: [C1 (6-10), C2 (60-100), C3 (600-1000), C4 (6000-10000)]
Data range 8: [C1 (7-10), C2 (70-100), C3 (700-1000), C4 (7000-10000)]
Data range 9: [C1 (8-10), C2 (80-100), C3 (800-1000), C4 (8000-10000)]
Data range 10: [C1 (9-10), C2 (90-100), C3 (900-1000), C4 (9000-10000)]

3.2 Performance Measures

3.2.1 Ranking Consistency

Ranking consistency indicates how well a particular normalization procedure produces rankings similar to those of the other procedures. To measure the ranking consistency index (RCI) of a particular normalization procedure, the number of times the procedure agreed (to various extents) with the other procedures is counted over the 10,000 simulation runs, and its ratio to the total number of runs is calculated. The higher the RCI, the better the procedure performs. In calculating RCI, a consistency weight (CW) is used as follows:

1) If a method is consistent with all 3 of the other 3 methods, then CW = 3/3 = 1.
2) If a method is consistent with any 2 of the other 3 methods, then CW = 2/3.

3) If a method is consistent with any 1 of the other 3 methods, then CW = 1/3.
4) If a method is not consistent with any other method, then CW = 0/3 = 0.

The ranking consistency index of N1 is calculated as

RCI(N1) = [T_1234*(CW=1) + T_123*(CW=2/3) + T_124*(CW=2/3) + T_134*(CW=2/3) + T_12*(CW=1/3) + T_13*(CW=1/3) + T_14*(CW=1/3) + TD_1234*(CW=0)] / TS   (8)

The ranking consistency index of N2 is calculated as

RCI(N2) = [T_1234*(CW=1) + T_123*(CW=2/3) + T_124*(CW=2/3) + T_234*(CW=2/3) + T_12*(CW=1/3) + T_23*(CW=1/3) + T_24*(CW=1/3) + TD_1234*(CW=0)] / TS   (9)

The ranking consistency index of N3 is calculated as

RCI(N3) = [T_1234*(CW=1) + T_123*(CW=2/3) + T_134*(CW=2/3) + T_234*(CW=2/3) + T_13*(CW=1/3) + T_23*(CW=1/3) + T_34*(CW=1/3) + TD_1234*(CW=0)] / TS   (10)

The ranking consistency index of N4 is calculated as

RCI(N4) = [T_1234*(CW=1) + T_124*(CW=2/3) + T_134*(CW=2/3) + T_234*(CW=2/3) + T_14*(CW=1/3) + T_24*(CW=1/3) + T_34*(CW=1/3) + TD_1234*(CW=0)] / TS   (11)

where
RCI(X) = ranking consistency index for normalization procedure X.
TS = total number of simulation runs (10,000 in this experiment).
T_1234 = number of times N1, N2, N3 and N4 produced the same ranking.
T_123 = number of times N1, N2 and N3 produced the same ranking.
T_124 = number of times N1, N2 and N4 produced the same ranking.
T_134 = number of times N1, N3 and N4 produced the same ranking.
T_234 = number of times N2, N3 and N4 produced the same ranking.
T_12 = number of times N1 and N2 produced the same ranking.
T_13 = number of times N1 and N3 produced the same ranking.
T_14 = number of times N1 and N4 produced the same ranking.
T_23 = number of times N2 and N3 produced the same ranking.
T_24 = number of times N2 and N4 produced the same ranking.
T_34 = number of times N3 and N4 produced the same ranking.
TD_1234 = number of times N1, N2, N3 and N4 all produced different rankings.
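Under the settings above, one simulation run can be sketched in Python. This is a hedged reconstruction of my own, not the authors' code: uniform random ratings and equal weights are assumed as stated in Section 3.1, `sum_norm` illustrates N4 from Section 2.4, and the RCI of Equations (8)-(11) is computed in the equivalent per-run form "fraction of the other three procedures whose ranking agrees", which yields the same CW weighting of 1, 2/3, 1/3 or 0:

```python
import random

def sum_norm(column):
    """N4: divide each rating by the sum of the attribute column."""
    total = sum(column)
    return [x / total for x in column]

def saw_ranking(matrix, normalize):
    """Normalize column-wise, aggregate with equal weights (SAW, Eq. (7)),
    and return the alternatives ordered best-first."""
    m, n = len(matrix), len(matrix[0])
    cols = [normalize([matrix[i][j] for i in range(m)]) for j in range(n)]
    values = [sum(cols[j][i] for j in range(n)) / n for i in range(m)]  # w_j = 1/n
    return tuple(sorted(range(m), key=lambda i: -values[i]))

def rci(procedures, bounds, runs=1000):
    """Average consistency weight per procedure over `runs` random matrices.
    `bounds` is a (low, high) pair per attribute, as in the data ranges."""
    totals = [0.0] * len(procedures)
    for _ in range(runs):
        matrix = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(4)]
        ranks = [saw_ranking(matrix, p) for p in procedures]
        for k, r in enumerate(ranks):
            agree = sum(1 for other in ranks if other == r) - 1  # exclude self
            totals[k] += agree / (len(procedures) - 1)           # CW for this run
    return [t / runs for t in totals]
```

For example, `rci([...], [(1, 10), (1, 100), (1, 1000), (1, 10000)])` with the four normalization procedures as callables reproduces the Data range 1 setting.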
3.2.2 Overall Preference Value Consistency

An overall preference value for each alternative is generated by aggregating the normalized performance ratings using an MADM method (SAW in this experiment). The overall preference value consistency is measured as the deviation in the ratio between any pair of normalization procedures, on the basis of the total positive distances in the overall preference values produced by each procedure. Ideally the deviation is 0, so a method with a lower deviation is certainly a better one. Fig. 1 shows the overall preference value matrix, which is obtained by combining the overall preference values of all alternatives for each normalization procedure.

      N_1   N_2   ...  N_m
A_1   P_11  P_12  ...  P_1m
A_2   P_21  P_22  ...  P_2m
A_3   P_31  P_32  ...  P_3m
...
A_n   P_n1  P_n2  ...  P_nm

Fig. 1 Overall preference value matrix

In Fig. 1, P_ij is the overall preference value of alternative A_i (i = 1, 2, ..., n) under normalization procedure N_j (j = 1, 2, ..., m). For simplicity and generality, four alternatives (A1, A2, A3, A4) and four normalization procedures (N1, N2, N3, N4) have been used as an example in the experiments. The total positive distance between the overall preference values for each normalization procedure is calculated as

TVD_1 = |P_11 - P_21| + |P_11 - P_31| + |P_11 - P_41| + |P_21 - P_31| + |P_21 - P_41| + |P_31 - P_41|   (12)

TVD_2 = |P_12 - P_22| + |P_12 - P_32| + |P_12 - P_42| + |P_22 - P_32| + |P_22 - P_42| + |P_32 - P_42|   (13)

TVD_3 = |P_13 - P_23| + |P_13 - P_33| + |P_13 - P_43| + |P_23 - P_33| + |P_23 - P_43| + |P_33 - P_43|   (14)

TVD_4 = |P_14 - P_24| + |P_14 - P_34| + |P_14 - P_44| + |P_24 - P_34| + |P_24 - P_44| + |P_34 - P_44|   (15)

where TVD_1, TVD_2, TVD_3 and TVD_4 are the total overall preference value distances for N1, N2, N3 and N4 respectively.

The pairwise distance ratios between normalization procedures are calculated as

RVD_12 = TVD_1 / TVD_2   (16)
RVD_13 = TVD_1 / TVD_3   (17)
RVD_14 = TVD_1 / TVD_4   (18)
RVD_23 = TVD_2 / TVD_3   (19)
RVD_24 = TVD_2 / TVD_4   (20)
RVD_34 = TVD_3 / TVD_4   (21)

where RVD_jk is the total overall preference value distance ratio between Nj and Nk.

The deviations in the pairwise ratios are calculated by

DRVD_12 = |1 - RVD_12|   (22)
DRVD_13 = |1 - RVD_13|   (23)
DRVD_14 = |1 - RVD_14|   (24)
DRVD_23 = |1 - RVD_23|   (25)
DRVD_24 = |1 - RVD_24|   (26)
DRVD_34 = |1 - RVD_34|   (27)

where DRVD_jk is the deviation in RVD between Nj and Nk.

The average deviation for each normalization procedure is calculated respectively as

AD(N1) = (DRVD_12 + DRVD_13 + DRVD_14) / 3   (28)
AD(N2) = (DRVD_12 + DRVD_23 + DRVD_24) / 3   (29)
AD(N3) = (DRVD_13 + DRVD_23 + DRVD_34) / 3   (30)
AD(N4) = (DRVD_14 + DRVD_24 + DRVD_34) / 3   (31)

where AD(X) is the average deviation for normalization procedure X. Equations (8)-(31) can be extended to accommodate any number of alternatives and any number of normalization procedures.

4 Experiment Results and Analysis

4.1 Results for Ranking Consistency

For each of the 10 data ranges, the simulation was run 10,000 times (i.e. 10,000 performance matrices were generated) and the number of times each normalization procedure produced the same or a different ranking as the other procedures was recorded. Fig. 2 shows the result.

Fig. 2 Ranking similarity (counts of T_1234 through TD_1234) over various data ranges.

As shown in Fig. 2, the number of times N1, N2, N3 and N4 produce the same ranking is almost 50% of the total runs over the different data ranges. There is an initial increase as the ranges narrow, but the count decreases slightly when the ranges become too narrow. The number of identical rankings produced by N1, N3 and N4 increases consistently as the data ranges narrow, almost reaching 50% of the total runs at the narrowest range. In addition, the number of identical rankings produced by N1 and N4 shows an initial increase but gradually decreases over narrow data ranges. It is also evident that the combination of N1, N2 and N3 and the combination of N2 and N3 have a significant number of similarities, which drop heavily with narrow data ranges. The other combinations produce the same ranking only a few times, and these counts decrease further with narrow data ranges. The result suggests that with wide data ranges, all four procedures produce similar outcomes almost 50% of the time. In particular, N1, N3 and N4 produce similar results in most cases.

The RCI for normalization procedures N1, N2, N3 and N4 can be calculated by applying Equations (8)-(11) to the ranking similarity result. Fig. 3 shows the results over the various data ranges.

Fig. 3 Ranking consistency index for N1, N2, N3 and N4 over various data ranges.

As shown in Fig. 3, N1 has the highest RCI, closely followed by N3 and N4. All three procedures show an increase in RCI as the data ranges narrow.
With narrowing data ranges, N4 shows a slightly better RCI than N3. N2 shows a poor level of RCI compared to the other procedures, with a decreasing trend over narrow data ranges. On the basis of ranking consistency, N1 performs best, followed by N3 and N4.

4.2 Results for Overall Preference Value Consistency

In order to calculate the pairwise ratio deviation, the overall preference value matrix shown in Fig. 1 was first generated using the results obtained from the ranking procedure. Equations (12) to (15) are then applied to obtain the total positive distance in preference values for each normalization procedure. Equations (16) to (21) are applied to calculate the pairwise ratio of the total distances for each pair of normalization procedures. Pairwise ratio deviations are obtained using Equations (22) to (27). Fig. 4 shows the result.
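The distance and deviation computations of Equations (12)-(31) can be sketched compactly in Python. This is my own restatement, not the authors' code: absolute differences implement the "total positive distance", and each pairwise ratio deviation is taken as the absolute departure of the TVD ratio from the ideal value 1. Here `pref_by_proc` holds one column of Fig. 1 per procedure (the P_ij values):

```python
from itertools import combinations

def total_distance(prefs):
    """TVD: sum of absolute pairwise differences among one procedure's
    overall preference values (Eqs. (12)-(15))."""
    return sum(abs(a - b) for a, b in combinations(prefs, 2))

def average_deviations(pref_by_proc):
    """AD for each procedure: mean of its pairwise ratio deviations
    |1 - TVD_j / TVD_k| (Eqs. (16)-(31))."""
    n = len(pref_by_proc)
    tvd = [total_distance(p) for p in pref_by_proc]
    dev = [[0.0] * n for _ in range(n)]
    for j, k in combinations(range(n), 2):
        dev[j][k] = dev[k][j] = abs(1 - tvd[j] / tvd[k])
    return [sum(dev[j]) / (n - 1) for j in range(n)]
```

A procedure whose preference values are simply a rescaling of another's would still show a nonzero deviation here, which is why the measure is sensitive to scale-stretching normalizations such as N2.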

Fig. 4 Pairwise deviation in total overall preference value distance ratio.

As indicated in Fig. 4, the N1 and N3 pair has the lowest deviation over all data ranges, with a slight increase for narrower ranges. The N1 and N4 pair shows moderate deviation with a minor increase over all data ranges. The N2 and N3 pair shows a low deviation for wide data ranges but increases dramatically with narrow data ranges. The N1 and N2 pair starts with a moderate deviation but increases heavily with narrower ranges. The N3 and N4 pair starts with a comparatively high deviation and increases gradually over narrow data ranges. The deviation for the N2 and N4 pair is the highest over all data ranges, with an increase over narrow ranges.

The average deviation for each of the normalization procedures N1, N2, N3 and N4 is calculated using Equations (28) to (31) respectively. Fig. 5 shows the average deviation for each normalization procedure over the various data ranges. As shown in Fig. 5, the average deviation is lower for N1 and N3 over all data ranges, with an increasing trend over narrow data ranges. N3 performs slightly better, although very similarly, for wide data ranges, while N1 performs better over narrow data ranges. The deviation for N4 shows less variation over the different data ranges but is higher than that of N1 and N3. N2 shows less deviation than N4 for wide data ranges but has a dramatic increase with narrow data ranges. In terms of overall preference value consistency, N1 performs best, closely followed by N3.

Fig. 5 Average deviation for N1, N2, N3 and N4.

5 Conclusion

Multiattribute decision making generally requires a normalization procedure to transform the different measurement units of attributes into a comparable unit. A study of the suitability of a specific normalization procedure for a given MADM method is therefore required. A simulation based study has been conducted to examine the effects of commonly used normalization procedures when applied with SAW to the general MADM problem. The result of the simulation study suggests that, with ranking consistency and overall preference value consistency as the performance measures, vector normalization (N1) and linear scale transformation with the max method (N3) are more suitable for SAW in decision settings where the attribute measurement units are diverse in range and there is a small number of alternatives to be ranked.

The new performance measures introduced and the new simulation process devised can be extended to other MADM methods and used to determine the best combination of normalization procedure and MADM method.

References
[1] Churchman, C.W. and Ackoff, R.L., "An Approximate Measure of Value", Journal of the Operations Research Society of America, Vol. 2, No. 2, pp. 172-187, 1954.
[2] Deng, H. and Yeh, C.-H., "Simulation-Based Evaluation of Defuzzification-Based Approaches to Fuzzy Multiattribute Decision Making", IEEE Trans. on Systems, Man, and Cybernetics - Part A, Vol. 36, No. 6, November 2006.
[3] Edwards, W., "How to Use Multiattribute Utility Measurement for Social Decision Making", IEEE Trans. on Systems, Man, and Cybernetics, Vol. 7, No. 5, pp. 326-340, 1977.
[4] Figueira, J., Mousseau, V. and Roy, B., "ELECTRE Methods" (Chapter 4), in Figueira, J., Greco, S. and Ehrgott, M. (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys, pp. 133-153, Springer, 2005.
[5] Fishburn, P.C., "Value Theory", in J.J. Moder and S.E. Elmaghraby (Eds.), Handbook of Operations Research, Chapter III-3, Van Nostrand Reinhold, New York, 1978.
[6] Hwang, C.L. and Yoon, K., Multiple Attribute Decision Making: Methods and Applications, Springer-Verlag, New York, 1981.
[7] Olson, D.L., "Comparison of three multicriteria methods to predict known outcomes", European Journal of Operational Research, Vol. 130, Issue 3, pp. 576-587, 2001.
[8] Saaty, T.L., "A Scaling Method for Priorities in Hierarchical Structures", J. Math. Psychology, Vol. 15, pp. 234-281, 1977.
[9] Saaty, T.L., "How to Make a Decision: The Analytic Hierarchy Process", Interfaces, Vol. 24, No. 6, pp. 19-43, 1994.
[10] Saaty, T.L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[11] Starr, M.K., Production Management, Prentice-Hall, Englewood Cliffs, NJ, 1972.
[12] Yeh, C.-H., "A problem-based selection of multi-attribute decision-making methods", Intl. Trans. in Op. Res., Vol. 9, pp. 169-181, 2002.
[13] Yeh, C.-H., "The Selection of Multiattribute Decision Making Methods for Scholarship Student Selection", International Journal of Selection and Assessment, Vol. 11, No. 4, December 2003.
[14] Yoon, K.P. and Hwang, C.L., Multiple Attribute Decision Making: An Introduction, SAGE Publications, London, 1995.
[15] Zanakis, S.H., Solomon, A., Wishart, N. and Dublish, S., "Multi-attribute decision making: A simulation comparison of selected methods", European Journal of Operational Research, Vol. 107, Issue 3, pp. 507-529, 1998.
[16] Zeleny, M., Multiple Criteria Decision Making, McGraw-Hill, New York, 1982.