Step Size Optimization of LMS Algorithm Using Particle Swarm Optimization Algorithm in System Identification


Rajni and Akash Tayal
Guru Gobind Singh Indraprastha University, Delhi, India

IJCSNS International Journal of Computer Science and Network Security, Vol. 13, No. 6, June 2013

Summary
System identification is the art and science of building mathematical models of dynamic systems from observed input-output data. This paper combines the Particle Swarm Optimization (PSO) algorithm with the LMS algorithm, applying PSO to the problem of parameter optimization for an adaptive Finite Impulse Response (FIR) filter: the LMS algorithm computes the filter coefficients, while PSO adaptively searches for the optimal step size. Because the step size influences both stability and performance, a method that controls it is necessary. However, the statistical Least Mean Squares method is faster than the genetic algorithm; we therefore suggest the genetic algorithm for off-line applications and the statistical method for on-line adaptation. A hybrid method combining the advantages of both is proposed for real-world applications.

Keywords
FIR, LMS, Particle Swarm Optimization, System Identification

1. Introduction
One weakness of conventional PSO is that its local search is not guaranteed to converge; its local search capability lies primarily in the swarm size and search parameters. On the other hand, the problem with simply running a brute-force population of independent LMS algorithms is that there is no collective information exchange between population members, which makes the algorithm inefficient and prone to the local-minimum problem of standard LMS [1]. It is therefore desirable to combine the convergent local search capability of the LMS algorithm with the global search of PSO. When initialized in the valley of the global optimum, the LMS algorithm can be tuned to provide an optimal rate of convergence without fear of encountering a local minimum [2]. Therefore, by using a structured stochastic search such as PSO to quickly focus the population on regions of interest, an optimally tuned LMS algorithm can take over and provide better results than standard LMS.

An important step in the system identification procedure is the estimation of parameters. An input is applied to both the system and the model, and the difference between the target system's output and the model's output is used to update a parameter vector so as to reduce this difference. To update the parameter vector we use the LMS algorithm, which is widely used because of its computational simplicity but which suffers from a slow rate of convergence. Furthermore, implementing the LMS algorithm requires selecting an appropriate value of the step size, which affects stability and performance. We use a search algorithm, Particle Swarm Optimization (PSO), to control the value of the step size adaptively in accordance with the input. This paper thus applies PSO to optimize the step size of the LMS algorithm, after which the LMS algorithm calculates the system identification parameters adaptively. PSO is a population-based search similar to the genetic algorithm (GA) [3].
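To make this configuration concrete, the following sketch (in Python with NumPy; the 6-tap unknown system, signal length, and zero initialization are illustrative assumptions, not values taken from this paper) drives an unknown FIR system and an adaptive FIR model with the same input and forms the error that the parameter update must reduce:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative unknown FIR system and white-noise excitation (assumed values).
    h_unknown = np.array([0.26, 0.93, 0.26, 0.10, 0.05, 0.02])
    N = len(h_unknown)                       # model order matches the unknown system
    x = rng.standard_normal(500)             # common input sequence {x(n)}
    d = np.convolve(x, h_unknown)[:len(x)]   # unknown-system output {d(n)}

    w = np.zeros(N)                          # adaptive FIR model, initially zero

    def model_output(w, x, n):
        # y(n) = w^T [x(n), x(n-1), ..., x(n-N+1)], zero-padded at the start
        xvec = np.zeros(len(w))
        m = min(len(w), n + 1)
        xvec[:m] = x[n::-1][:m]
        return w @ xvec

    # The error e(n) = d(n) - y(n) is the quantity the update rule must shrink.
    e0 = d[0] - model_output(w, x, 0)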
In adaptive filtering, the mean squared error (MSE) between the output of the unknown system and the output of the adaptive filter is the typical cost function, and it will hence be used for the fitness evaluation of each particle in the on-line form of PSO. In this work we likewise take the MSE as the cost function.

2. Swarm Intelligence
Swarm intelligence describes the collective behavior of decentralized, self-organized natural or artificial systems. Swarm intelligence models have been employed in artificial intelligence; the expression was introduced in 1989 by Jing Wang and Gerardo Beni in the context of cellular robotic systems. Swarm Intelligence (SI) is an innovative paradigm for solving optimization problems. SI systems are typically made up of populations of simple agents interacting locally with one another and with their environment [4]. Each agent follows simple rules, and the interactions between agents lead to the emergence of intelligent global behavior unknown to the individual agents.

2.1 PSO as an Optimization Tool
Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling [3]. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations.

However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Detailed information is given in the following sections. Compared to GA, the advantages of PSO are that it is easy to implement and has few parameters to adjust. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied.

2.2 Particle Swarm Optimization Algorithm
PSO simulates the behavior of bird flocking. Suppose the following scenario: a group of birds is randomly searching for food in an area, and there is only one piece of food in the area being searched. None of the birds knows where the food is, but in each iteration they know how far away it is. What, then, is the best strategy to find the food? An effective one is to follow the bird that is nearest to the food. PSO learned from this scenario and uses it to solve optimization problems. In PSO, each single solution is a "bird" in the search space; we call it a "particle". All particles have fitness values, which are evaluated by the function to be optimized, and velocities, which direct the flight of the particles. The particles fly through the problem space by following the current optimum particles [3].

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In each iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) it has achieved so far (the fitness value is also stored); this value is called pbest. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is a global best, called gbest. When a particle takes part of the population as its topological neighbors, the best value is a local best, called lbest. After finding the two best values, the particle updates its velocity and position with the following equations:

    V_i^(u+1) = w V_i^(u) + C1 rand1() (pbest_i - P_i^(u)) + C2 rand2() (gbest - P_i^(u))   (1)

    P_i^(u+1) = P_i^(u) + V_i^(u+1)   (2)

In the above equations, the term C1 rand1() (pbest_i - P_i^(u)) is called the particle memory influence and the term C2 rand2() (gbest - P_i^(u)) is called the swarm influence. V_i^(u), the velocity of the i-th particle at iteration u, must lie in the range Vmin <= V_i^(u) <= Vmax. The parameter Vmax determines the resolution, or fineness, with which the regions between the present position and the target position are searched. If Vmax is too high, particles may fly past good solutions; if it is too small, particles may not explore sufficiently beyond local solutions. In many PSO applications, Vmax is set at 10-20% of the dynamic range of each dimension. The constants C1 and C2 pull each particle towards the pbest and gbest positions: low values allow particles to roam far from the target regions before being tugged back, while high values result in abrupt movement towards, or past, target regions.
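As a minimal sketch, equations (1) and (2) translate directly into the following Python/NumPy update; the parameter values (w = 0.7, Vmax = 0.1) and the symmetric choice Vmin = -Vmax are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    def pso_update(P, V, pbest, gbest, w=0.7, C1=2.0, C2=2.0, Vmax=0.1):
        # One iteration of equations (1) and (2) for all particles at once.
        # P, V, pbest: shape (particles, dimensions); gbest: shape (dimensions,).
        r1 = rng.random(P.shape)                                    # rand1()
        r2 = rng.random(P.shape)                                    # rand2()
        V = w * V + C1 * r1 * (pbest - P) + C2 * r2 * (gbest - P)   # eq. (1)
        V = np.clip(V, -Vmax, Vmax)    # keep velocities in the allowed range
        P = P + V                                                   # eq. (2)
        return P, V

Clipping the velocity implements the range constraint discussed above.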
The acceleration constants C1 and C2 are often set to 2.0, in line with past experience. Suitable selection of the inertia weight w provides a balance between global and local exploration, and thus requires fewer iterations on average to find a sufficiently optimal solution. In general, the inertia weight w is set according to the following equation:

    w = Wmax - ((Wmax - Wmin) / ITERmax) x ITER   (3)

where
    w       - inertia weighting factor
    Wmax    - maximum value of the weighting factor
    Wmin    - minimum value of the weighting factor
    ITERmax - maximum number of iterations
    ITER    - current iteration number
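In code, equation (3) is a one-line linear ramp; the bounds Wmax = 0.9 and Wmin = 0.4 below are common choices in the PSO literature, assumed here for illustration only:

    def inertia_weight(ITER, ITER_max, W_max=0.9, W_min=0.4):
        # Linearly decreasing inertia weight of equation (3).
        return W_max - ((W_max - W_min) / ITER_max) * ITER

Early iterations then use a large w (global exploration) and later iterations a small w (local refinement).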

2.3 PSO Flowchart
[Flowchart of the PSO algorithm.]

2.4 PSO for Adaptive Filtering
In adaptive filtering, the mean squared error (MSE) between the output of the unknown system and the output of the adaptive filter is the typical cost function, and it is hence used for the fitness evaluation of each particle in the on-line form of PSO. For an adaptive system identification configuration, the windowed MSE cost function is

    J_i(n) = (1/N) SUM_{k=0}^{N-1} [d(n-k) - y_{k,i}(n)]^2   (4)

    y_{k,i}(n) = f[x(n-k-1), x(n-k-2), ..., x(n-k-L)]   (5)

where f(.) is a nonlinear operator, N is the length of the window over which the error is averaged, and L is the amount of delay in the filter. The adaptive filter output y_k(n) may also be a function of past values of itself if it contains feedback, or of intermediate variables if the adaptive filter has a cascaded structure. When J(n) is minimized, the adaptive filter parameters provide the best possible representation of the unknown system (a code sketch of this fitness evaluation is given after Section 2.5).

2.5 Population Size
A larger population will always provide a better search and faster convergence on average, regardless of the complexity of the error surface, due to the increased number of estimates evaluated at each epoch. The results shown in Fig. 2 indicate that this increased search capacity enables the algorithm to converge to a more precise solution. However, the computational complexity of the algorithm increases linearly with the population size, which motivates algorithm variants that remain robust with smaller populations.

[Fig. 2: Effect of population size in PSO; convergence curves for populations of (a) 200, (b) 100, (c) 80, and (d) 20.]
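Returning to the cost function of Section 2.4, a minimal sketch of the fitness evaluation of one particle, under the reconstruction of equation (4) above (averaging the squared error over the most recent N samples), could read:

    import numpy as np

    def fitness(d, y, n, N):
        # Windowed MSE J(n) of equation (4): mean of e^2 over the last N samples.
        # d, y: unknown-system and adaptive-filter output sequences; n: current time.
        k0 = max(0, n - N + 1)
        e = d[k0:n + 1] - y[k0:n + 1]
        return np.mean(e ** 2)

A lower J(n) means a fitter particle, so the particle whose parameters currently best reproduce the unknown system becomes gbest.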

3. LMS Algorithm

3.1 Basic Principle of the LMS Algorithm
The LMS algorithm belongs to the family of adaptive filters known as stochastic gradient-based algorithms, as it utilizes the gradient vector of the filter tap weights to converge on the optimal Wiener solution [6]. It is well known and widely used due to its computational simplicity; it is this simplicity that has made it the benchmark against which all other adaptive filtering algorithms are judged. In words (Fig. 3):

    updated weight vector = old weight vector + step size x input vector x error signal

[Fig. 3: The LMS algorithm.]

With each iteration of the LMS algorithm, the filter tap weights of the adaptive filter are updated according to the following formula:

    W(n+1) = W(n) + 2u e(n) x(n)   (6)

Here x(n) is the input vector of time-delayed input values,

    x(n) = [x(n) x(n-1) ... x(n-N+1)]^T   (7)

and the vector w(n) = [w_0(n) w_1(n) w_2(n) ... w_{N-1}(n)]^T represents the adaptive FIR filter tap weights at time n. The parameter u, known as the step-size parameter, is a small positive constant that controls the influence of the updating factor. Selection of a suitable value for u is imperative to the performance of the LMS algorithm: if the value is too small, the time the adaptive filter takes to converge on the optimal solution will be too long; if u is too large, the adaptive filter becomes unstable and its output diverges.

4. System Model & FIR Filter

4.1 Unknown System Model
The unknown system is modeled by an FIR system of length N. Both the unknown system and the FIR model are connected in parallel and excited by the same input sequence {x(n)}. If {y(n)} denotes the output of the model and {d(n)} denotes the output of the unknown system, the error sequence is e(n) = d(n) - y(n). The unknown system is an FIR system [7].

[Fig. 4: Proposed structure. The unknown system and the adaptive FIR filter h(n) are driven by the same signal x(n); the error e(n) = d(n) - y(n) is fed to the PSO block, which supplies the step size u to the LMS block.]

In this paper a new procedure to estimate the coefficients of an adaptive filter is proposed. It combines the Particle Swarm Optimization algorithm with the Least-Mean-Square (LMS) method: in each iteration of PSO, after the calculation of gbest, an LMS algorithm is applied based on the previous gbest. In our structure, the error signal is fed to the PSO block, and PSO decides on an appropriate step size with a lower error value; the selected step size is then fed to the LMS block, which updates the coefficients simultaneously. The main advantage of PSO is that it can escape local minima, but it is a slow process; the LMS algorithm is faster, but may diverge in some cases or remain in local minima, and its results are not as accurate as those of PSO-based procedures. Our proposed approach combines the benefits of both algorithms: it accelerates the very slow PSO and escapes the local minima that may trap LMS [8].
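One plausible (offline) reading of this structure is sketched below in Python/NumPy: each particle is a candidate step size u, its fitness is the MSE produced when LMS runs with that step size, and the resulting gbest step size is the one handed to the LMS block. The swarm size, iteration count, and bounds on the step size are illustrative assumptions, not values reported in this paper:

    import numpy as np

    rng = np.random.default_rng(2)

    def lms_run(x, d, mu, N=6):
        # LMS with step size mu (equation (6)); returns the error sequence e(n).
        w = np.zeros(N)
        e = np.zeros(len(x))
        for n in range(N, len(x)):
            xvec = x[n:n - N:-1]              # [x(n), x(n-1), ..., x(n-N+1)]
            e[n] = d[n] - w @ xvec
            w = w + 2 * mu * e[n] * xvec      # eq. (6)
        return e

    def pso_step_size(x, d, swarm=10, iters=20, mu_min=1e-4, mu_max=0.05):
        # PSO over candidate step sizes; fitness = MSE of the resulting LMS run.
        P = rng.uniform(mu_min, mu_max, swarm)    # particle positions (mu values)
        V = np.zeros(swarm)
        pbest, pbest_J = P.copy(), np.full(swarm, np.inf)
        gbest, gbest_J = P[0], np.inf
        for u in range(iters):
            w_in = 0.9 - (0.5 / iters) * u        # inertia weight as in eq. (3)
            for i in range(swarm):
                J = np.mean(lms_run(x, d, P[i]) ** 2)   # fitness of particle i
                if J < pbest_J[i]:
                    pbest[i], pbest_J[i] = P[i], J
                    if J < gbest_J:
                        gbest, gbest_J = P[i], J
            r1, r2 = rng.random(swarm), rng.random(swarm)
            V = w_in * V + 2.0 * r1 * (pbest - P) + 2.0 * r2 * (gbest - P)
            P = np.clip(P + V, mu_min, mu_max)
        return gbest

With x and d as in the earlier system-identification sketch, mu = pso_step_size(x, d) selects the step size, and lms_run(x, d, mu) then yields an error curve analogous to Fig. 8 below.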

5. Simulation Results
[Fig. 5: Fitness function (cost versus iteration).]

[Fig. 6: Comparison of the actual weights and the weights estimated using fixed-step-size LMS and PSO variable-step-size LMS.]

[Fig. 7: True and estimated system outputs using PSO variable-step-size LMS.]

[Fig. 8: Error curve for PSO variable-step-size LMS.]

6. Conclusion & Future Work
The goal of this work is to establish PSO as a viable approach to adaptive filtering problems, and to introduce and explore enhancements to the convergence properties of structured stochastic search algorithms in general. It becomes apparent that PSO is not only competitive with conventional global search techniques but superior in many instances. Structured stochastic techniques in general, besides avoiding local minima, demonstrate convergence rates that are an order of magnitude faster (per input sample) than existing gradient-based techniques.

A topic that warrants further research is an assessment of the performance of these algorithms on modified error surfaces. Structured stochastic search algorithms evidently perform better on surfaces exhibiting relatively few local minima, because the local attractors offer little interference to the search. By adopting alternative formulations of the adaptive filter structures or by preprocessing the data, for example through input orthogonalization and power normalization, the error surface can be smoothed dramatically; such additions will likely improve the performance of stochastic search algorithms. Related to this is the observation that structured stochastic algorithms, particularly versions of PSO, tend to excel on lower-order parameter spaces, partly because lower-dimensional parameter spaces tend to exhibit fewer local minima, and partly because the volume of the hyperspace increases exponentially with each additional parameter to be estimated. It is therefore hypothesized that dimensionality-reduction techniques, such as alternative adaptive filter structures that can accurately model unknown systems with a minimum number of parameters, could benefit the overall performance.

In addition, further work is needed to extend and optimize the relationship and coordination of the various algorithm parameters and convergence mechanisms. Finally, the parallel hardware implementation of these algorithms needs to be investigated, which is essential if these algorithms are to become more widely accepted in practice.

References
[1] Nambiar, R. and Mars, P., "Genetic and Annealing Approaches to Adaptive Digital Filtering," Proc. 26th Asilomar Conf. on Signals, Systems, and Computers, vol. 2, pp. 871-875, Oct. 1992.
[2] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementations, Kluwer Academic Publishers, Boston, 1997.
[3] Kennedy, J. and Eberhart, R. C., "Particle Swarm Optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[4] A. Abraham et al., "Swarm Intelligence: Foundations, Perspectives and Applications," Studies in Computational Intelligence (SCI), vol. 26, pp. 3-25, 2006.
[5] Z. Banjac, B. Kovacevic, M. Veinovic and M. Milosavljevic, "Robust Least Mean Square Adaptive FIR Filter Algorithm," IEEE, 2001.
[6] Crina Grosan and Ajith Abraham, Intelligent Systems, Springer Reference Library, vol. 17, pp. 345-386, 2011, DOI: 10.1007/978-3-642-21004-4_14.
[7] Xiaochun Guan, "Qx-LMS Adaptive FIR Filters for System Identification," 2nd IEEE International Conference on Image and Signal Processing (CISP '09), 2009.
[8] Oscar Castillo, Oscar Montiel, Roberto Sepulveda, and Patricia Melin, "Application of a Breeder Genetic Algorithm for System Identification in an Adaptive Finite Impulse Response Filter," IEEE, 2001.

Rajni was born in 1987 in Rohtak, Haryana. She received her B.Tech degree in Electronics and Communication Engineering in 2008 from Sh. Baba Mastnath College of Engineering, Maharishi Dayanand University, Rohtak, Haryana. She is currently a student in the Master's degree program at Indira Gandhi Institute of Technology, Guru Gobind Singh Indraprastha University, Delhi, India, and has 4 years of teaching experience. Her research interests include metaheuristics, signal processing, and image processing.

Akash Tayal received his B.Tech degree from Jamia Millia Islamia, Delhi, India, and his M.Tech degree from Netaji Subhash Institute of Technology, Delhi University, Delhi, India, in 2002. He has over 9 years of teaching experience and is presently working with the Department of Electronics and Communication Engineering, Indira Gandhi Institute of Technology, Guru Gobind Singh Indraprastha University, Delhi, India, as an Assistant Professor. He has published around 20 research papers in reputed journals and international conferences. His research interests include digital image processing, optimization, nonlinear statistical processing, and game theory.