Solving the Hard Knapsack Problems with a Binary Particle Swarm Approach


Bin Ye, Jun Sun, and Wen-Bo Xu
School of Information Technology, Southern Yangtze University, No. 1800, Lihu Dadao, Wuxi, Jiangsu 214122, P.R. China
yebinxie@yahoo.com.cn, sunjun_wx@hotmail.com, xwb@sytu.edu.cn

D.-S. Huang, K. Li, and G.W. Irwin (Eds.): ICIC 2006, LNBI 4115, pp. 155-163, 2006. (c) Springer-Verlag Berlin Heidelberg 2006

Abstract. Knapsack problems are important NP-complete combinatorial optimization problems. Although nearly all the classical instances can be solved in pseudo-polynomial time nowadays, there remains a variety of test problems that are hard for the existing algorithms. In this paper we propose a new approach based on the binary particle swarm optimization algorithm (BPSO) to find solutions of these hard knapsack problems. The standard PSO iteration equations are modified to operate in discrete space. Furthermore, a heuristic operator based on the total-value greedy algorithm is employed in the BPSO approach to deal with constraints. Numerical experiments show that the proposed algorithm outperforms both existing exact approaches and recent state-of-the-art search heuristics on most of the hard knapsack problems.

1 Introduction

The well-known NP-complete knapsack problem (KP) is defined as follows: given a set of items with corresponding unit profits p_j and unit weights w_j, along with a knapsack capacity limit c, select a subset of the items such that the total profit is maximized while the total weight does not exceed c. It can be assumed, without loss of generality, that all profits and weights are positive, that all weights are smaller than the knapsack capacity c, and that the total weight exceeds the capacity. By introducing the binary decision variable x_j, with x_j = 1 if item j is selected and x_j = 0 otherwise, the classical 0/1 knapsack problem can be formulated as:

    maximize Σ_{j=1}^n p_j x_j   subject to Σ_{j=1}^n w_j x_j ≤ c,   with x_j ∈ {0, 1}, j = 1, ..., n.
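To make the objective and the constraint concrete, here is a minimal Python sketch (not from the paper; the function and variable names are our own) that evaluates a candidate bit vector against this formulation:

    def evaluate(x, profits, weights, c):
        """Return (total_profit, feasible) for a 0/1 solution x of the KP."""
        total_profit = sum(p for p, xj in zip(profits, x) if xj)
        total_weight = sum(w for w, xj in zip(weights, x) if xj)
        return total_profit, total_weight <= c

    # Example: 4 items, capacity 10; x selects items 0 and 2.
    profits, weights, c = [6, 5, 8, 9], [2, 3, 6, 7], 10
    print(evaluate([1, 0, 1, 0], profits, weights, c))  # (14, True)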

In the last few decades, many exact and heuristic techniques have been proposed to solve knapsack problems. The exact algorithms include dynamic programming [1,2] and branch-and-bound [3,4], while the heuristic search procedures for solving the problem approximately include tabu search [5], genetic algorithms (GA) [6,7] and other randomized methods. A good overview of recent exact approaches is given in [8]. It is shown there that although the existing algorithms are capable of solving nearly all the KP instances cited in the literature within reasonable time, there are two groups of new test problems that are hard to solve. For the first group of difficult instances, with large coefficients, the running times of the dynamic programming algorithms are unacceptably high, while for the other group, containing six categories of structured instances with small coefficients, the branch-and-bound algorithms perform badly.

In this paper a novel BPSO algorithm is developed to solve the hard knapsack problems. A heuristic operator converting infeasible solutions into feasible ones is applied to deal with the knapsack constraints. The operator is based on the total-value heuristic, which at each stage picks the item that contributes the highest total profit given the remaining knapsack capacity. In order to test the BPSO algorithm thoroughly, we adopt the technique in [8] to construct test instances. The experimental results of three different types of algorithms are compared with ours.

The rest of the paper is organized as follows. In Section 2, a brief introduction to PSO is given. A binary particle swarm algorithm for knapsack problems is then proposed in Section 3. Section 4 presents the generation of the two groups of difficult instances and a performance comparison of our algorithm with the most recent algorithms on these instances. Conclusions are given in Section 5.

2 Particle Swarm Optimization

Particle swarm optimization was originally invented for function optimization in continuous real-number spaces by Kennedy and Eberhart [9]. A review of its recent approaches to global optimization problems is presented in [10]. In a PSO model, a potential solution is represented as a particle with position X_id and velocity V_id in a D-dimensional space. Each particle maintains a record of the best position it has experienced so far, called its personal best position or Pbest. Each particle shares its Pbest with its neighbors, so there is also a global best solution Gbest. At each search iteration, the ith particle moves according to the following equations:

    V_id = V_id + α (Pbest_id - X_id) + β (Gbest_d - X_id)    (1)
    X_id = X_id + V_id    (2)

where α and β are random numbers whose upper bounds determine the influence of Pbest and Gbest. In order to improve the performance of the original PSO algorithm, several revised versions have been proposed. One approach is to introduce an inertia weight ω into (1), so that (1) is replaced by

    V_id = ω V_id + α (Pbest_id - X_id) + β (Gbest_d - X_id).    (3)
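As a concrete illustration of update rule (3), the following minimal Python sketch performs one velocity/position update for a single particle in the continuous model. The names are ours, and drawing α and β uniformly in [0, φ] per dimension is the common convention, not something the paper spells out; the parameter values follow those quoted later in Section 3.2:

    import random

    def pso_step(x, v, pbest, gbest, omega=0.729, phi1=1.49, phi2=1.49):
        """One inertia-weight PSO update, equations (1)-(3), for one particle.

        x, v, pbest are lists of floats for this particle; gbest is the
        swarm-best position. alpha and beta are drawn per dimension.
        """
        for d in range(len(x)):
            alpha = random.uniform(0.0, phi1)
            beta = random.uniform(0.0, phi2)
            v[d] = omega * v[d] + alpha * (pbest[d] - x[d]) + beta * (gbest[d] - x[d])
            x[d] = x[d] + v[d]
        return x, v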

A BPSO algorithm that operates in discrete space was developed in [11]. However, that BPSO algorithm is susceptible to function saturation, which occurs when velocity values become either too large or too small. A technique using angle modulation to reduce the complexity of binary problems is proposed in [12]; though efficient, it leaves discrete constraint satisfaction problems untouched. We present our BPSO algorithm, based on the inertia-weight PSO model, in the next section.

3 A BPSO Algorithm for KP

3.1 Representation

As with a GA, the first step in implementing a BPSO algorithm is to design a scheme for representing individuals. Since the decision variable x_j is binary, the obvious choice is to represent a solution as an n-bit binary string, where n is the number of items in the KP. Each particle's position and velocity in our algorithm are therefore initialized as n-bit binary-coded random vectors.

3.2 Iteration Equations

Before presenting the iteration equations, we introduce some new operators to be applied in our proposed BPSO. In a binary space, a particle moves to nearer or farther corners of the hypercube (the search space) by flipping bits in its position vector. The distance between a particle's current position and its previous best position (denoted Pbest_id - X_id for the dth dimension in (3)) can therefore be stated as a vector whose bit is 1 if the alleles of X_i and Pbest_i differ and 0 otherwise. For example, if a particle's current position and its previous best position are

    X:     (10011)
    Pbest: (00011)

then the distance is (10000), since only the first bits of the two vectors differ. From this simple example it can be observed that the distance is the output of the binary XOR function applied to X and Pbest. Accordingly, the operation Pbest_id - X_id in (3) is replaced with Pbest_id ⊕ X_id. Similarly, the distance between a particle's current position and the global best position is evaluated as Gbest ⊕ X_i.

From (3), it can be seen that a particle's velocity at iteration t+1 is primarily determined by three elements: the velocity at iteration t, the distance between X_i and Pbest, and the distance between X_i and Gbest. Since the function of each of the three elements (also vectors) is to flip the corresponding bits in the position vector, they can be united into one vector.
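Before describing how the three vectors are combined, here is a minimal Python sketch of the XOR distance just defined (our own helper, not from the paper):

    def xor_distance(x, pbest):
        """Bitwise XOR of two equal-length bit lists: 1 where the alleles differ."""
        return [xi ^ pi for xi, pi in zip(x, pbest)]

    print(xor_distance([1, 0, 0, 1, 1], [0, 0, 0, 1, 1]))  # [1, 0, 0, 0, 0]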

Consequently, this combination is implemented using the OR operation. Here is an example: supposing the three elements are

    A: (10001)
    B: (01000)
    C: (00100)

respectively, the velocity will be V = A + B + C = (11101), where + denotes bitwise OR.

A particle's position vector is then updated using the velocity vector: if any bit in the velocity vector is 1, the corresponding allele in the position vector is flipped. This operation is equivalent to the binary XOR function. Continuing the previous example, with an arbitrary position vector (10000), the updated position vector at the next iteration will be (01101). As a result, a particle in a binary space moves according to the following equations:

    V_id = ω V_id + α (Pbest_id ⊕ X_id) + β (Gbest_d ⊕ X_id)    (4)
    X_id = X_id ⊕ V_id    (5)

where + denotes the OR combination described above, and the inertia weight ω is generally set to less than 1.0. α and β are called acceleration coefficients and are used to control the convergence speed of the algorithm. Based on the previous work in [13], we set the typical parameters as follows: a population of 50, an inertia weight ω of 0.729, and α and β both equal to 1.49.
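The paper does not spell out how the real-valued coefficients ω, α and β act on bit vectors in (4). The sketch below therefore takes one plausible reading, treating each coefficient as a per-bit probability of retaining the corresponding component before the OR combination; this gating, and the helper names, are our assumptions, not the authors' stated rule:

    import random

    def bpso_step(x, v, pbest, gbest, omega=0.729, alpha=1.49, beta=1.49):
        # One binary PSO move per equations (4)-(5): XOR distances,
        # OR combination, then an XOR flip of the position bits.
        def gate(bits, coeff):
            # Keep each set bit with probability min(U(0, coeff), 1).
            # This probabilistic reading of the coefficients is our assumption.
            return [b if random.random() < min(random.uniform(0.0, coeff), 1.0) else 0
                    for b in bits]

        d_p = [xi ^ pi for xi, pi in zip(x, pbest)]  # Pbest XOR X
        d_g = [xi ^ gi for xi, gi in zip(x, gbest)]  # Gbest XOR X
        v = [a | b | c for a, b, c in
             zip(gate(v, omega), gate(d_p, alpha), gate(d_g, beta))]
        x = [xi ^ vi for xi, vi in zip(x, v)]        # flip where velocity bit is 1
        return x, v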

3.3 Constraint Handling

Obviously, the solutions generated by (4) and (5) may be infeasible, because the knapsack constraint may be violated. A number of standard ways to deal with constraints have been proposed. Having compared their performance in preliminary experiments, we adopt in our BPSO the approach of using a heuristic operator to convert an infeasible solution into a feasible one.

Such a heuristic operator is traditionally based on a density-ordered greedy algorithm, which at each stage picks the item with the highest profit-to-weight ratio. Instead of the density-ordered greedy algorithm, our heuristic operator is based on a total-value greedy algorithm, in which at each stage the item with the highest profit is selected if its weight does not exceed the remaining knapsack capacity. It has been shown that the total-value greedy heuristic dominates the density-ordered greedy algorithm with regard to both worst-case and average-case performance [14].

Our heuristic operator consists of two phases: the drop phase and the add phase. The drop phase is implemented as follows. Once a solution X_i = (x_1, x_2, ..., x_n) generated by the BPSO is infeasible, first calculate its redundancy weight Σ_{j=1}^n w_j x_j - c (obviously greater than zero). Then, among the items picked in the infeasible solution, find the subset of items whose individual weights are smaller than the redundancy weight. Remove the items in this subset one by one, in ascending order of weight, until the solution becomes feasible.

The add phase aims to improve the fitness of a feasible solution. Given the remaining knapsack capacity, the add phase keeps adding the item with the largest profit, among the items not yet included in the solution whose weights do not exceed the remaining capacity, until no such item remains.

3.4 Algorithm Outline

The general steps of our algorithm are described as follows:

    initialize each particle with a random position and velocity;
    initialize t_max;
    while t < t_max {
        for i = 1 : population_size {
            if the solution of particle i is infeasible {
                make the solution feasible;
            }
            calculate the fitness value;
        }
        update Pbest and Gbest;
        update each particle's velocity and position according to (4) and (5);
    }
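For illustration, the "make the solution feasible" step in the outline above could look like the following minimal Python sketch of the drop and add phases described in Section 3.3. The names are ours, and the fallback for the case where no selected item is lighter than the redundancy weight is our addition; the paper does not specify it:

    def repair(x, profits, weights, c):
        """Two-phase repair: drop phase, then total-value greedy add phase."""
        n = len(x)
        weight = sum(weights[j] for j in range(n) if x[j])

        # Drop phase: among selected items lighter than the redundancy weight,
        # remove in ascending order of weight until the solution is feasible.
        while weight > c:
            redundancy = weight - c
            candidates = [j for j in range(n) if x[j] and weights[j] < redundancy]
            if not candidates:  # fallback (our assumption): drop lightest selected item
                candidates = [j for j in range(n) if x[j]]
            j = min(candidates, key=lambda k: weights[k])
            x[j] = 0
            weight -= weights[j]

        # Add phase (total-value greedy): repeatedly add the most profitable
        # unselected item that still fits in the remaining capacity.
        while True:
            fitting = [j for j in range(n) if not x[j] and weights[j] <= c - weight]
            if not fitting:
                break
            j = max(fitting, key=lambda k: profits[k])
            x[j] = 1
            weight += weights[j]
        return x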

4 Computational Experiments

In order to test our algorithm for the knapsack problem more thoroughly, we analyze its performance in terms of both efficiency and accuracy. Efficiency is a measure of the time required to complete the search, and accuracy evaluates the quality of the solutions obtained. The test instances we use for the measurements are some of the instances from [8], which covers a large variety of instance types. The performance of our approach for different instance types and data ranges is compared with that of the GA [6] and of two well-known exact approaches, namely Expknap [4] and Minknap [1].

Two groups of difficult instances are considered in our experiments. One group consists of the traditional instances with large coefficients; the other includes instances with small coefficients on which present algorithms nevertheless perform badly. The first group makes the dynamic programming algorithms run slowly, while the second group mainly challenges the branch-and-bound algorithms.

4.1 Difficult Instances with Large Coefficients

Six types of traditional data instances are briefly described below. Because the traditional test instances with small data ranges are too easy to support any meaningful conclusions, we test each type with data ranges R = 10^6 and 10^7 for different problem sizes (n = 100, 500, 1000, 5000 and 10000).

Uncorrelated instances (uncorr.): The weights w_j and the profits p_j are chosen randomly in [1, R].

Weakly correlated instances (weak corr.): The weights w_j are distributed in [1, R] and the profits p_j in [w_j - R/10, w_j + R/10] such that p_j ≥ 1.

Strongly correlated instances (str. corr.): The weights are chosen in [1, R] and the profits are set to p_j = w_j + R/10.

Inverse strongly correlated instances (inv. str. corr.): The profits p_j are chosen in [1, R] and w_j = p_j + R/10.

Almost strongly correlated instances (al. str. corr.): The weights w_j are distributed in [1, R] and the profits in [w_j + R/10 - R/500, w_j + R/10 + R/500].

Subset-sum instances (sub. sum): The weights w_j are distributed in [1, R] and p_j = w_j.

In order to eliminate the capacity-dependency of the performance, we choose the capacity of instance number h as c = h/(1000+1) · Σ_{j=1}^n w_j, for h = 1, 2, ..., 1000. We test all the instances on an Intel Pentium 4, 2.9 GHz, with 256 MB RAM. The average execution time for each instance type, i.e. the average time the algorithm takes over all 1000 instances, is calculated. Tables 1-3 give the results for the three algorithms. For each instance type, if not all instances were solved within the time limit of 30 minutes or the space limit, the entry is marked with "-" in the table.

Best-so-far values found in 50 runs by our algorithm and by the GA during the same time period are compared in Table 4. A typical GA run uses a crossover probability of 0.8, a mutation probability of 0.01, and a population of 50 (the same as in our BPSO algorithm). In each run we use a time limit of 10 minutes as the stopping condition for both the GA and our BPSO.

It is clear that our BPSO algorithm has a stable performance on all the instance types, whereas the exact algorithm Expknap can solve only a few instances within the given time or space limit. For the dynamic programming algorithm Minknap, the average execution times for the strongly correlated and the inverse strongly correlated instances grow beyond the time limit, due to the rapid increase in computational complexity. From the results in Table 4, it can be observed that the near-optimal solutions obtained by our BPSO algorithm are closer to the optimal solution than those obtained by the GA for a majority of the instances.
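Before turning to the results tables, here is a minimal Python sketch, under our reading of the constructions above, of a few of the large-coefficient generators and the capacity series (the helper names are ours):

    import random

    def make_instance(kind, n, R):
        # Generate one large-coefficient instance of the given type.
        w = [random.randint(1, R) for _ in range(n)]
        if kind == "uncorr":
            p = [random.randint(1, R) for _ in range(n)]
        elif kind == "weak_corr":
            # p_j in [w_j - R/10, w_j + R/10], clipped so that p_j >= 1
            p = [max(1, random.randint(wj - R // 10, wj + R // 10)) for wj in w]
        elif kind == "str_corr":
            p = [wj + R // 10 for wj in w]
        elif kind == "sub_sum":
            p = list(w)
        else:
            raise ValueError("unknown instance type: " + kind)
        return p, w

    def capacity(w, h, series=1000):
        # c = h/(1000+1) * sum of weights, for instance number h = 1, ..., 1000.
        return h * sum(w) // (series + 1)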

Table 1. Average execution times (ms) for instances with large coefficients, Expknap

           uncorr.        weak corr.     str. corr.     inv. str. corr.  al. str. corr.  sub. sum
  n \ R    10^6   10^7    10^6   10^7    10^6   10^7    10^6    10^7     10^6    10^7    10^6   10^7
  100      0.1    -       -      -       -      -       -       -        627.1   -       13.1   187.4
  500      0.1    0.1     -      -       -      -       -       -        -       -       18.9   -
  1000     0.3    -       -      -       -      -       -       -        -       -       27.6   -
  5000     1.1    1.3     -      -       -      -       -       -        -       -       -      -
  10000    2.4    -       -      -       -      -       -       -        -       -       -      -

Table 2. Average execution times (ms) for instances with large coefficients, Minknap

           uncorr.        weak corr.     str. corr.     inv. str. corr.  al. str. corr.  sub. sum
  n \ R    10^6   10^7    10^6   10^7    10^6   10^7    10^6    10^7     10^6    10^7    10^6    10^7
  100      0.2    0.2     0.6    0.5     755.1  -       442.3   -        12.4    8.9     553.6   -
  500      0.2    0.3     0.5    0.6     -      -       -       -        256.1   247.3   512.6   -
  1000     0.4    0.4     0.9    0.9     -      -       -       -        636.8   897.1   681.7   -
  5000     1.4    1.4     7.3    4.8     -      -       -       -        8634.7  -       736.3   -
  10000    2.6    3.3     18.3   20.7    -      -       -       -        -       -       841.8   -

Table 3. Average execution times (ms) for instances with large coefficients, BPSO

           uncorr.        weak corr.     str. corr.     inv. str. corr.  al. str. corr.  sub. sum
  n \ R    10^6   10^7    10^6   10^7    10^6   10^7    10^6    10^7     10^6    10^7    10^6   10^7
  100      7.3    8.7     7.1    7.1     6.7    7.2     7.3     6.8      6.9     8.1     7.3    9.8
  500      37.9   20.9    38.4   31.4    31.2   25.6    38.5    46.2     35.4    36.3    51.4   21.2
  1000     77.1   42.4    59.4   53.7    41.0   40.6    79.7    87.5     41.6    40.9    52.8   32.8
  5000     318.3  411.2   309.0  305.5   304.1  303.6   312.8   344.6    303.6   411.1   308.6  325.3
  10000    976.2  963.6   879.4  1160.1  935.6  1267.3  1577.6  1960.7   881.9   923.1   919.5  939.2

Table 4. Optimal values found by GA and our BPSO, R = 10^6, c = (1/2) · Σ_{j=1}^n w_j for all the instances

           uncorr.                   str. corr.
  n        GA          BPSO          GA          BPSO
  100      39465410    39465410      32995624    33048651
  500      201957104   202661372     157736230   157803943
  1000     400917743   403053093     315650918   315789002
  5000     2436292167  2446654061    1593860725  1597840513
  10000    3751841203  3759039860    2800845121  2801430527

4.2 Difficult Instances with Small Coefficients

The following are some difficult instances with small coefficients. The capacity is chosen as in the previous section. The outcomes are summarized in Tables 5-7.

Spanner instances span(v, m): The weights w_k of a set of v items (the spanner set) are chosen randomly in [1, R], with the profits set according to one of the three distributions above (uncorrelated, weakly correlated and strongly correlated, respectively). The v spanner items are then normalized by setting p_k = [2p_k/m] and w_k = [2w_k/m]. The n items are constructed by repeatedly choosing an item from the spanner set and multiplying it by a random number in the interval [1, m]. Here, we consider uncorrelated span(2,10), weakly correlated span(2,10) and strongly correlated span(2,10).

Multiple strongly correlated instances mstr(k_1, k_2, d): The weights are chosen randomly in [1, R]. If the weight is divisible by d, the profit is set to p_j = w_j + k_1, otherwise p_j = w_j + k_2. The parameters are k_1 = 3R/10, k_2 = 2R/10, d = 6.

Profit ceiling instances pceil(d): The weights are randomly distributed in [1, R] and the profits are p_j = d · ⌈w_j/d⌉. The parameter d is chosen as d = 3.

Circle instances circle(d): The weights are randomly distributed in [1, R] and p_j = d · sqrt(4R² - (w_j - 2R)²). We choose d = 2/3.

Table 5. Average execution times (ms) for instances with small coefficients, Expknap

           span(2,10)                             mstr(3R/10,   pceil(3)   circle(2/3)
  n        uncorr.   weak. corr.   str. corr.     2R/10, 6)
  100      -         -             -              52.1          -          24.7
  500      -         -             -              -             -          -
  1000     -         -             -              -             -          -
  5000     -         -             -              -             -          -
  10000    -         -             -              -             -          -

Table 6. Average execution times (ms) for instances with small coefficients, Minknap

           span(2,10)                             mstr(3R/10,   pceil(3)   circle(2/3)
  n        uncorr.   weak. corr.   str. corr.     2R/10, 6)
  100      0.1       0.1           0.1            1.0           0.5        0.8
  500      1.2       1.3           2.1            12.4          5.4        37.4
  1000     15.6      23.9          17.9           40.0          39.3       54.9
  5000     363.2     817.4         803.1          465.0         1133.5     613.9
  10000    1142.9    1600.4        3720.4         1350.8        3661.2     1109.5

Table 7. Average execution times (ms) for instances with small coefficients, BPSO

           span(2,10)                             mstr(3R/10,   pceil(3)   circle(2/3)
  n        uncorr.   weak. corr.   str. corr.     2R/10, 6)
  100      7.9       5.7           7.2            7.3           3.9        3.5
  500      34.4      28.4          28.1           32.2          36.5       21.4
  1000     65.6      73.9          65.9           87.0          74.3       77.3
  5000     667.1     690.9         633.1          912.0         764.0      868.7
  10000    2595.0    1967.3        2136.6         1974.5        2003.1     1084.3

Table 8. Optimal values found by GA and BPSO for three types of difficult instances, R = 1000

           str. corr. span(2,10)   mstr(3R/10, 2R/10, 6)   pceil(3)
  n        GA        BPSO          GA         BPSO          GA         BPSO
  100      13488     13497         39613      39710         24210      24210
  500      53016     53035         199662     199673        126543     126501
  1000     366142    366061        405512     405547        244767     244776
  5000     1069621   1070096       2025096    2030286       1255665    1255665
  10000    2233409   2233650       4055139    4050196       2515947    2516037

From these results, it is clear that Expknap has the worst performance of all the algorithms.
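To make the small-coefficient constructions concrete, here is a minimal Python sketch of the mstr, pceil and circle profit functions defined above (the helper names are ours; each w_j is assumed to lie in [1, R] as specified):

    import math
    import random

    def mstr_profit(wj, R, k1=None, k2=None, d=6):
        # Multiple strongly correlated: p_j = w_j + k1 if d divides w_j, else w_j + k2.
        k1 = 3 * R // 10 if k1 is None else k1
        k2 = 2 * R // 10 if k2 is None else k2
        return wj + (k1 if wj % d == 0 else k2)

    def pceil_profit(wj, d=3):
        # Profit ceiling: p_j = d * ceil(w_j / d).
        return d * math.ceil(wj / d)

    def circle_profit(wj, R, d=2/3):
        # Circle: p_j = d * sqrt(4R^2 - (w_j - 2R)^2); the argument stays
        # positive because w_j <= R < 4R.
        return d * math.sqrt(4 * R**2 - (wj - 2 * R)**2)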

Although Minknap is a bit faster than our algorithm for smaller problem sizes n, the solution times of both algorithms are very stable. Here again, Table 8 shows that our BPSO algorithm performs better than the GA.

5 Conclusion

In this paper we proposed a novel binary particle swarm approach and applied it to the hard knapsack problems. Based on the total-value greedy algorithm, a heuristic operator was designed to handle the knapsack constraints. The approach has been thoroughly evaluated on different instance types and problem sizes, by comparing our algorithm with some of the best-known approaches in the existing literature. As the obtained results show, our approach exhibits an excellent level of accuracy and efficiency on the hard knapsack problems.

References

1. Pisinger, D.: A Minimal Algorithm for the 0-1 Knapsack Problem. Operations Research, 45 (1997) 758-767
2. Martello, S., Pisinger, D., Toth, P.: Dynamic Programming and Strong Bounds for the 0-1 Knapsack Problem. Management Science, 45 (1999) 414-424
3. Martello, S., Toth, P.: A New Algorithm for the 0-1 Knapsack Problem. Management Science, 34 (1988) 633-644
4. Pisinger, D.: An Expanding-Core Algorithm for the Exact 0-1 Knapsack Problem. European Journal of Operational Research, 87 (1995) 175-187
5. Gandibleux, X., Freville, A.: Tabu Search Based Procedure for Solving the 0-1 Multiobjective Knapsack Problem: The Two Objectives Case. Journal of Heuristics, 6 (2000) 361-383
6. Chu, P.C., Beasley, J.E.: A Genetic Algorithm for the Multidimensional Knapsack Problem. Journal of Heuristics, 4 (1998) 63-86
7. Thiel, J., Voss, S.: Some Experiences on Solving Multiconstraint Zero-One Knapsack Problems with Genetic Algorithms. INFOR, 32 (1994) 226-242
8. Pisinger, D.: Where Are the Hard Knapsack Problems? Computers & Operations Research, 32 (2005) 2271-2284
9. Kennedy, J., Eberhart, R.C.: Particle Swarm Optimization. In: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia (1995) 1942-1948
10. Parsopoulos, K.E., Vrahatis, M.N.: Recent Approaches to Global Optimization Problems through Particle Swarm Optimization. Natural Computing, 1 (2002) 235-306
11. Kennedy, J., Eberhart, R.C.: A Discrete Binary Version of the Particle Swarm Algorithm. In: Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics, IEEE Press (1997) 4104-4109
12. Pampara, G., Franken, N., Engelbrecht, A.P.: Combining Particle Swarm Optimization with Angle Modulation to Solve Binary Problems. In: IEEE Congress on Evolutionary Computation, 1 (2005) 89-96
13. Van den Bergh, F.: An Analysis of Particle Swarm Optimizers. PhD thesis, Department of Computer Science, University of Pretoria, South Africa (2002)
14. Kohli, R., Krishnamurti, R., Mirchandani, P.: Average Performance of Greedy Heuristics for the Integer Knapsack Problem. European Journal of Operational Research, 154 (2004) 36-45