Multi-objective Optimization Using Self-adaptive Differential Evolution Algorithm


V. L. Huang, S. Z. Zhao, R. Mallipeddi and P. N. Suganthan

Abstract - In this paper, we propose a Multiobjective Self-adaptive Differential Evolution algorithm with objective-wise learning strategies (OW-MOSaDE) to solve numerical optimization problems with multiple conflicting objectives. The proposed approach learns suitable crossover parameter values and mutation strategies for each objective separately in a multi-objective optimization problem. The performance of the proposed OW-MOSaDE algorithm is evaluated on a suite of 13 benchmark problems provided for the CEC 2009 MOEA Special Session and Competition (http://www3.ntu.edu.sg/home/epnsugan/) on Performance Assessment of Constrained / Bound Constrained Multi-Objective Optimization Algorithms.

I. INTRODUCTION

Many real-world problems can be formulated as optimization problems with multiple objectives. Since the first attempt to solve multi-objective optimization problems using evolutionary algorithms [12], multi-objective evolutionary algorithms (MOEAs) have been extensively researched and are widely used in numerous applications [13][14]. MOEAs benefit from an evolutionary algorithm's ability to generate a set of solutions concurrently in a single run, thereby yielding several trade-off solutions. Differential Evolution (DE) [8] is one of the most commonly used EAs. It is a simple and powerful population-based stochastic direct search method for solving numerical optimization problems in a continuous search space. In DE, a single trial vector generation strategy must be pre-specified, with its parameters tuned via a time-consuming trial-and-error scheme. Recently, we developed a Self-adaptive Differential Evolution (SaDE) algorithm, in which both the trial vector generation strategies and their associated parameters are automatically adapted according to their previous experience of generating offspring superior or inferior to the parent.
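The success-history mechanism just described, in which strategy selection probabilities and the CR median (CRm) are refreshed from outcomes recorded over a learning period, can be illustrated with a minimal sketch. This is not the authors' implementation; the class and attribute names are invented for the example.

```python
import statistics

class StrategyAdapter:
    """Tracks, per strategy, how often trial vectors improved on their
    parents, and adapts selection probabilities and the CR median."""

    def __init__(self, n_strategies, crm=0.5):
        self.n = n_strategies
        self.probs = [1.0 / n_strategies] * n_strategies  # equal probabilities at start
        self.crm = crm              # median CR used when sampling new CR values
        self.success = [0] * n_strategies
        self.trials = [0] * n_strategies
        self.good_cr = []           # CR values that produced improving offspring

    def record(self, strategy, cr, improved):
        """Record the outcome of one trial vector generated by `strategy`."""
        self.trials[strategy] += 1
        if improved:
            self.success[strategy] += 1
            self.good_cr.append(cr)

    def update(self):
        """At the end of a learning period: probabilities become the
        normalized success rates, and CRm becomes the median of the
        successful CR values. Counters are then reset."""
        rates = [s / t if t else 0.0 for s, t in zip(self.success, self.trials)]
        total = sum(rates)
        if total > 0:
            self.probs = [r / total for r in rates]
        if self.good_cr:
            self.crm = statistics.median(self.good_cr)
        self.success = [0] * self.n
        self.trials = [0] * self.n
        self.good_cr = []
```

OW-MOSaDE, as described in the following sections, keeps one such record per objective rather than a single global one.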
Accordingly, as the search proceeds, different strategies with their associated parameters can be learned and applied to efficiently evolve the population at different stages. The SaDE algorithm has demonstrated promising performance when solving different types of optimization problems [1][2][3][5]. In this work, we improve the MOSaDE algorithm [5] with objective-wise learning strategies (termed OW-MOSaDE) to solve problems with multiple conflicting objectives, and evaluate its performance on the 13 test problems of [15]. The original MOSaDE learns just one set of parameters for all the objectives. In MOPs, different objective functions may possess different properties. Hence, it can be beneficial to learn one set of parameters for each objective, as proposed in OW-MOSaDE.

(V. L. Huang, S. Zhao, R. Mallipeddi and P. N. Suganthan are with the School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave., 639798 Singapore; emails: huangling@pmail.ntu.edu.sg, epnsugan@ntu.edu.sg.)

II. MULTI-OBJECTIVE DIFFERENTIAL EVOLUTION

The Multi-objective Optimization Problem (MOP) can be defined as:

    Minimize/Maximize  F(x) = (f_1(x), f_2(x), ..., f_m(x))
    Subject to         G(x) = (g_1(x), g_2(x), ..., g_j(x)) <= 0

where x = (x_1, x_2, ..., x_n) is the decision vector, F(x) is the objective vector, and the constraints g(x) <= 0 determine the feasible region.

Some work already exists on adapting the control parameters of DE when solving multi-objective optimization problems (MOPs). Abbass et al. introduced a Pareto-frontier Differential Evolution algorithm (PDE) to solve MOPs by incorporating Pareto dominance [6]. Later, the first author self-adapted the crossover rate of PDE by encoding the crossover rate into each individual and evolving it simultaneously with the other parameters; the scaling factor F was generated for each variable from a Gaussian distribution N(0, 1) [7]. Zaharie proposed a parameter adaptation scheme for DE (ADE) based on controlling the population diversity [10].
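All comparisons in the remainder of the paper rest on Pareto dominance over the objective vector F(x) defined above. For minimization, a minimal sketch (function names are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b under minimization:
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(a, b):
    """True if neither objective vector dominates the other."""
    return not dominates(a, b) and not dominates(b, a)
```

For instance, (1, 2) dominates (2, 2), while (1, 3) and (2, 2) are mutually non-dominated.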
Following the same ideas, Zaharie and Petcu designed an adaptive Pareto DE algorithm for solving MOPs and analyzed its parallel implementation [11]. Xue et al. used a fuzzy logic controller (FLC) to dynamically adjust the parameters of multi-objective differential evolution [9].

III. OBJECTIVE-WISE LEARNING IN MULTI-OBJECTIVE SADE

The earlier Multi-objective SaDE algorithm (MOSaDE) [5] is an extension of our recently developed SaDE [3] to problems with multiple objectives. Similar to SaDE, the MOSaDE algorithm automatically adapts the trial vector generation strategies and their associated parameters according to their previous experience of generating superior

or inferior offspring. When extending the single-objective SaDE algorithm to the multi-objective domain, the criterion used to decide whether the offspring is superior or inferior to its parent must be changed. In the original MOSaDE algorithm, trial vector A is better than target vector B if (1) A dominates B, or (2) A and B are non-dominated with respect to each other but A is less crowded than B. Whenever the trial vector is better than the target vector according to this criterion, the associated parameter and strategy are recorded, as done in SaDE.

In our proposed objective-wise MOSaDE (OW-MOSaDE), we maintain m+1 vectors to store the CR parameter values and strategy probability values: one vector per objective, recording the values that improve that particular objective, plus one vector for the non-dominance-based criterion used in the original MOSaDE [5]. For example, in a 3-objective problem, if a particular set of parameter and strategy values improves objectives 1 and 2, then the vectors corresponding to objectives 1 and 2 are updated with the new parameter and strategy values.

The two strategies incorporated into our proposed OW-MOSaDE algorithm are:

DE/rand/1/bin:

    u_{j,i} = x_{j,r1} + F * (x_{j,r2} - x_{j,r3})   if rand[0,1) < CR or j = j_rand
    u_{j,i} = x_{j,i}                                otherwise

DE/rand/2/bin:

    u_{j,i} = x_{j,r1} + F * (x_{j,r2} - x_{j,r3}) + F * (x_{j,r4} - x_{j,r5})   if rand[0,1) < CR or j = j_rand
    u_{j,i} = x_{j,i}                                otherwise

The indices r1, r2, r3, r4 and r5 are mutually exclusive integers randomly generated within the range [1, NP], anew for each mutant vector. These mutation strategies have been commonly used when solving MOPs [4][6][7][10][11].

The main steps of OW-MOSaDE are described below:

Step 1. Randomly initialize a population of NP individuals. Initialize n_obj vectors, each corresponding to one objective. Each vector contains a strategy probability (p_k, k = 1, ..., K, where K is the number of available strategies), the median value of CR (CRm) for each strategy, and the learning period (LP = 50).

Step 2.
Evaluate the individuals in the population, and fill the external archive with these individuals.

Step 3. For each objective, we have a set of parameters p and CR. In the initialization stage, we randomly select one of the m+1 sets of parameters to generate the trial vector population. After selecting the set of parameter values associated with the chosen objective, we perform the following optimization loop:

(1) Assign a trial vector generation strategy and parameters to each target vector X_i:
  (a) Use stochastic universal sampling to select one strategy for each target vector X_i.
  (b) Assign control parameters F and CR.
      F: generate normally distributed F values with the mean linearly reduced from 1.0 to 0.05 and standard deviation 0.1.
      CR: generate CR values from a normal distribution with mean CRm and standard deviation 0.1.

(2) After all the parameter assignments, generate a new population in which each trial vector U_i is produced according to its assigned trial vector generation strategy and parameters F and CR.

(3) Selection:
FOR i = 1:NP
  (a) Evaluate the trial vector U_i and compare it with the target vector X_n(i) nearest to U_i in the solution space.
      IF X_n(i) dominates U_i, discard U_i.
      ELSE IF U_i dominates X_n(i), replace X_n(i) with U_i;
      IF they are non-dominated with respect to each other, randomly choose one to be the new target vector.
      U_i enters the external archive if (i) U_i dominates some individual(s) of the archive (the dominated archive members are deleted), or (ii) U_i is non-dominated with respect to the archived individuals.
      END IF
  (b) We compare U_i with X_n(i) on each objective to update the m+1 sets of parameters when the offspring is better than the parent. If U_i is better than X_n(i) on a particular objective, we record the associated CR parameter and flag the strategy in the objective-wise vector corresponding to that objective as successful; otherwise, we flag it as failed. The strategy probability p_k, the success rate of trial vectors generated by each strategy during the learning period, is calculated for each objective separately.
      Updating of CR: after the first LP generations, CRm is recalculated from the recorded CR values.
  (c) When the external archive exceeds the maximum specified size, we retain the less crowded individuals, selected by the harmonic average distance [4], to maintain the archive size.
END FOR

Step 4. While none of the termination conditions is satisfied, go to Step 3.

IV. EXPERIMENTAL RESULTS

We evaluate the performance of the proposed objective-wise MOSaDE algorithm (OW-MOSaDE) and the original MOSaDE [5] on a set of 13 benchmark functions [15], which include 7 two-objective, 3 three-objective and 3 five-objective test functions. The experiments are designed according to [15] and the experimental results are presented as required by [15]. We use the same four learning strategies

introduced in Section III in both algorithms.

PC Configuration: Windows XP Professional; Intel Pentium 4 CPU, 3.00 GHz; 2 GB of memory. Language: MATLAB 7.1.

Parameter Settings: The population size is set at 50. The maximum external archive size is set at 100, 150 and 800 for 2, 3 and 5 objectives respectively, as specified in [15]. The maximum FES is set at 3e+5.

Results Achieved: For all the test functions, we recorded the approximation set at every generation of each run. From these approximation sets, we calculated the performance metric IGD [15]. The smallest, largest, mean and standard deviation of the IGD values obtained for each test instance over the 30 runs are presented in Table I for OW-MOSaDE; the mean IGD values obtained by the original MOSaDE are also shown in the table. Smaller IGD values indicate better results. For OW-MOSaDE, plots of the final approximation set with the smallest IGD value in the objective space are shown in Figs. 1-10 for test instances with 2 and 3 objectives.

From the IGD values in Table I, OW-MOSaDE performs better than the original MOSaDE on all 13 benchmark test functions. This demonstrates that OW-MOSaDE improves upon the search ability of the original MOSaDE by using the new objective-wise learning strategy together with the non-domination-wise learning strategy, instead of only the non-domination-wise learning strategy, when solving these multi-objective test problems.

V. CONCLUSION

In this paper, we enhanced the multi-objective self-adaptive differential evolution algorithm with objective-wise learning of parameters and mutation strategies, assigning to each objective the individual set of parameters most appropriate for optimizing it. The performance of our approach was evaluated on the benchmark test functions from the CEC 2009 special session on Performance Assessment of Constrained / Bound Constrained Multi-Objective Optimization Algorithms.

ACKNOWLEDGEMENT

The authors acknowledge the financial support offered by A*Star (Agency for Science, Technology and Research) under grant # 052 101 0020 to conduct this research.

REFERENCES

[1] A. K. Qin and P. N. Suganthan, "Self-adaptive differential evolution algorithm for numerical optimization," in IEEE Congress on Evolutionary Computation (CEC 2005), Edinburgh, Scotland, IEEE Press, Sept. 2005, pp. 1785-1791.
[2] V. L. Huang, A. K. Qin and P. N. Suganthan, "Self-adaptive differential evolution algorithm for constrained real-parameter optimization," in 2006 IEEE Congress on Evolutionary Computation (CEC 06), Vancouver, BC, Canada, July 2006.
[3] A. K. Qin, V. L. Huang and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Trans. on Evolutionary Computation, DOI: 10.1109/TEVC.2008.927706, 2009.
[4] V. L. Huang, P. N. Suganthan, A. K. Qin and S. Baskar, "Multiobjective differential evolution with external archive and harmonic distance-based diversity measure," Technical Report, Nanyang Technological University, 2005.
[5] V. L. Huang, A. K. Qin, P. N. Suganthan and M. F. Tasgetiren, "Multi-objective optimization based on self-adaptive differential evolution algorithm," in Proc. IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, Sept. 2007.
[6] H. A. Abbass, R. Sarker and C. Newton, "PDE: A Pareto-frontier differential evolution approach for multiobjective optimization problems," in Proc. of IEEE Congress on Evolutionary Computation, pp. 971-978, 2001.
[7] H. A. Abbass, "The self-adaptive Pareto differential evolution algorithm," in Proc. of Congress on Evolutionary Computation, Vol. 1, IEEE Press, pp. 831-836, 2002.
[8] R. Storn and K. V. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[9] F. Xue, A. C. Sanderson, P. P. Bonissone and R. J. Graves, "Fuzzy logic controlled multi-objective differential evolution," in The 14th IEEE International Conference on Fuzzy Systems, 2005, pp. 720-725.
[10] D. Zaharie, "Control of population diversity and adaptation in differential evolution algorithms," in R. Matousek, P. Osmera (eds.), Proc. of Mendel 2003, 9th International Conference on Soft Computing, Brno, Czech Republic, June 2003, pp. 41-46.
[11] D. Zaharie and D. Petcu, "Adaptive Pareto differential evolution and its parallelization," in Proc. of 5th International Conference on Parallel Processing and Applied Mathematics, Czestochowa, Poland, Sept. 2003, pp. 261-268.
[12] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," in J. J. Grefenstette (Ed.), Proceedings of an International Conference on Genetic Algorithms and Their Applications, Pittsburgh, PA, pp. 93-100, 1985.
[13] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. Chichester, U.K.: Wiley, 2001.
[14] C. A. C. Coello, D. A. V. Veldhuizen and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic Publishers, New York, March 2002.
[15] Qingfu Zhang, Aimin Zhou, S. Z. Zhao, P. N. Suganthan, Wudong Liu and Santosh Tiwari, "Multiobjective optimization test instances for the CEC 2009 special session and competition," Technical Report, University of Essex, Colchester, UK and Nanyang Technological University, Singapore, 2008.

Table I: The IGD values of the 30 final approximation sets obtained for each test problem.

Fun        OW-MOSaDE                                                    Original-MOSaDE
           Smallest (IGD)  Largest (IGD)  Mean (IGD)   Std (IGD)       Mean (IGD)
 1. UF1         0.0103         0.0152        0.0122      0.0012           0.0983
 2. UF2         0.0056         0.0135        0.0081      0.0023           0.0607
 3. UF3         0.0727         0.1512        0.1030      0.0190           0.3248
 4. UF4         0.0483         0.0562        0.0513      0.0019           0.0977
 5. UF5         0.3991         0.4766        0.4303      0.0174           0.6963
 6. UF6         0.0926         0.2271        0.1918      0.0290           0.3640
 7. UF7         0.0208         0.1576        0.0585      0.0291           0.1916
 8. UF8         0.0800         0.1199        0.0945      0.0119           0.4019
 9. UF9         0.0574         0.1621        0.0983      0.0244           0.3984
10. UF10        0.5777         0.9402        0.7430      0.0885           2.9313
11. UF11        0.3360         0.4724        0.3951      0.0384           0.5450
12. UF12      593.3990       812.4774      734.5680     65.0659         783.2456
13. UF13        2.8298         4.5500        3.2573      0.3521           3.6881

FIGURES 1-4. Plots of the final approximation set with the smallest IGD value in the objective space for test instances UF1-UF4, respectively.

FIGURES 5-10. Plots of the final approximation set with the smallest IGD value in the objective space for test instances UF5-UF10, respectively.
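The IGD indicator reported in Table I is, in the usual definition of [15], the average distance from each point of a reference Pareto-front sample to its nearest point in the obtained approximation set, so that smaller values reward both convergence and spread. A minimal sketch assuming Euclidean distance (function names are illustrative):

```python
import math

def igd(reference, approximation):
    """Inverted Generational Distance: mean Euclidean distance from each
    reference point to the closest point of the approximation set."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, a) for a in approximation) for r in reference) / len(reference)
```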