Performance Optimization of Image Fusion using Meta Heuristic Genetic Algorithm


Navita Tiwari
RKDF Institute of Science & Technology, Bhopal (MP)
navita.tiwari@gmail.com

Abstract

Fusing information contained in multiple images plays an increasingly important role in quality inspection for industrial processes, as well as in situation assessment for autonomous systems and assistance systems. The aim of image fusion in general is to use images as redundant or complementary sources and to extract information from them with higher accuracy or reliability. This dissertation describes image fusion in detail. It first introduces the three basic fusion levels, which are pixel-level, feature-level and decision-level fusion, and compares their properties and other aspects. It then describes the evaluation criteria for image fusion results from two aspects: subjective evaluation and objective evaluation. For the quantitative evaluation of image fusion results and quality, this text defines and uses multiple evaluation parameters, such as fused-image entropy, mutual information (MI), average gradient, standard deviation, cross-entropy, joint entropy, bias, relative bias, mean square error, root mean square error and peak SNR, and establishes the corresponding evaluation criteria.

Keywords: image fusion, wavelet transform, DCT, neural network, genetic algorithm

1. Introduction

With the continuous development of sensor technology, there are more and more ways to obtain images, and the types of image fusion are also increasingly rich, such as fusion of images from the same sensor, multi-spectral image fusion from a single sensor, fusion of images from sensors of different types, and fusion of image and non-image data. Traditional data fusion can be divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion. Different fusion levels use different fusion algorithms and have different applications; research, including this work, generally addresses pixel-level fusion. Classical fusion algorithms include computing the average pixel-by-pixel gray level of the source images, the Laplacian pyramid, the contrast pyramid, the ratio pyramid, and the Discrete Wavelet Transform (DWT). However, averaging the pixel-by-pixel gray levels of the source images leads to undesirable side effects such as contrast reduction. The basic idea of DWT-based methods is to perform a decomposition of each source image, combine these decompositions into a composite representation, and recover the fused image from it by the inverse transform. This method has been shown to be effective. However, the wavelet transform can only reflect edge characteristics "across" an edge and cannot express characteristics "along" an edge; at the same time, since it is isotropic, it cannot precisely represent edge direction. To address this limitation of the wavelet transform, Donoho et al. proposed the Curvelet transform, which uses edges as basic elements, is mathematically mature, and can adapt well to image characteristics. Moreover, the Curvelet transform is anisotropic and more directional, and can provide more information for image processing [1-2]. From the principles of the Curvelet transform we know that it has directional characteristics and that its basis support satisfies an anisotropic scaling relation, in addition to the multiscale and locality characteristics of the wavelet transform.
The Curvelet transform can appropriately represent image edges and smooth areas, with the same precision recovered by the inverse transform. After research on fusion algorithms for the low-band and high-band coefficients in the Curvelet domain, a scheme was proposed in which the low-band coefficients are fused with the NGMS method and the high-band coefficients of the different directions are fused with the LREMS method.
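To make the objective criteria listed in the abstract concrete, here is a minimal Python sketch of four of them (entropy, average gradient, RMSE and peak SNR), assuming 8-bit grayscale images held in NumPy arrays; it is an illustration, not the paper's own implementation:

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit grayscale image (bits per pixel)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """Mean magnitude of the local intensity gradient (a sharpness proxy)."""
    g0, g1 = np.gradient(img.astype(np.float64))
    return np.mean(np.sqrt((g0 ** 2 + g1 ** 2) / 2.0))

def rmse(reference, fused):
    """Root mean square error between a reference image and the fused image."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    e = rmse(reference, fused)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)
```

Mutual information, cross-entropy and the remaining criteria can be computed from the same histograms in the same spirit.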

Figure 1: Process of image fusion algorithm based on Curvelet transform

1.1 Fusion Methods

The following summarizes several approaches to the pixel-level fusion of spatially registered input images. Most of these methods have been developed for the fusion of stationary input images (such as multispectral satellite imagery). Due to the static nature of the input data, temporal aspects arising in the fusion of image sequences, e.g. stability and consistency, are not addressed. A generic categorization of image fusion methods follows.

1.1.1 Linear Superposition

Probably the most straightforward way to build a fused image from several input frames is to perform the fusion as a weighted superposition of all input frames. The optimal weighting coefficients, with respect to information content and redundancy removal, can be determined by a principal component analysis (PCA) of all input intensities. By performing a PCA of the covariance matrix of the input intensities, the weighting for each input frame is obtained from the eigenvector corresponding to the largest eigenvalue (a code sketch of this weighting follows Section 1.1.3). A similar procedure is the linear combination of all inputs in a pre-chosen color space (e.g. R-G-B or H-S-V), leading to a false color representation of the fused image.

1.1.2 Nonlinear Methods

Another simple approach to image fusion is to build the fused image by applying a simple nonlinear operator such as max or min. If the bright objects are of interest in all input images, a good choice is to compute the fused image by a pixel-by-pixel application of the maximum operator. An extension of this approach follows from the introduction of morphological operators such as opening or closing. One application is the use of conditional morphological operators, with highly reliable 'core' features present in both images and a set of 'potential' features present in only one source, where the actual fusion is performed by applying conditional erosion and dilation operators. A further extension of this approach is image algebra, a high-level algebraic extension of image morphology designed to describe all image processing operations. The basic types defined in image algebra are value sets, coordinate sets (which allow the integration of different resolutions and tessellations), images and templates. For each basic type, binary and unary operations are defined, ranging from the basic set operations to more complex operations on images and templates. Image algebra has been used in a generic way to combine multisensor images.

1.1.3 Optimization Approaches

In this approach to image fusion, the fusion task is expressed as a Bayesian optimization problem. Using the multisensor image data and an a priori model of the fusion result, the goal is to find the fused image which maximizes the a posteriori probability. Since this problem cannot be solved in general, some simplifications are introduced: all input images are modeled as Markov random fields to define an energy function which describes the fusion goal. Due to the equivalence of Gibbs random fields and Markov random fields, this energy function can be expressed as a sum of so-called clique potentials, where only pixels in a predefined neighborhood affect the actual pixel. The fusion task then consists of minimizing this energy function, which is equivalent to maximizing the a posteriori probability. Since the energy function is in general nonconvex, stochastic optimization procedures such as simulated annealing, or modifications like iterated conditional modes, are typically used.
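As referenced in Section 1.1.1, the following is a minimal sketch of the PCA-weighted superposition, together with the pixel-by-pixel maximum rule of Section 1.1.2. The inputs are assumed to be registered, same-size grayscale arrays, and the sign handling and normalization of the leading eigenvector are choices of this sketch rather than something fixed by the text:

```python
import numpy as np

def pca_fuse(images):
    """Fuse registered grayscale images by weighted superposition, with
    weights taken from the eigenvector of the covariance matrix of the
    input intensities that corresponds to the largest eigenvalue
    (Section 1.1.1)."""
    stack = np.stack([im.astype(np.float64).ravel() for im in images])
    cov = np.cov(stack)                     # k x k covariance of the k inputs
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    w = eigvecs[:, -1]                      # leading eigenvector
    w = np.abs(w) / np.abs(w).sum()         # resolve sign, normalize to sum 1
    return sum(wi * im.astype(np.float64) for wi, im in zip(w, images))

def max_fuse(a, b):
    """Pixel-by-pixel maximum (Section 1.1.2): keeps the brighter objects."""
    return np.maximum(a, b)
```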

1.1.4 Image Pyramids

Image pyramids were initially described for multiresolution image analysis and as a model of binocular fusion in human vision. A generic image pyramid is a sequence of images in which each image is constructed by low-pass filtering and subsampling its predecessor. Due to the subsampling, the image size is halved in both spatial directions at each level of the decomposition, leading to a multiresolution signal representation. The difference between the input image and the filtered image is needed to allow an exact reconstruction from the pyramidal representation. The image pyramid approach thus yields a signal representation with two pyramids: the smoothing pyramid containing the averaged pixel values, and the difference pyramid containing the pixel differences, i.e. the edges. The difference pyramid can therefore be viewed as a multiresolution edge representation of the input image. There are several modifications of this generic pyramid construction. Some authors propose nonlinear pyramids, such as the ratio and contrast pyramids, where the multiscale edge representation is computed by a pixel-by-pixel division of neighboring resolutions. A further modification substitutes morphological nonlinear filters for the linear filters, resulting in the morphological pyramid. Another type of image pyramid, the gradient pyramid, results if the input image is decomposed into its directional edge representation using directional derivative filters. The basic idea of the generic multiresolution fusion scheme is motivated by the fact that the human visual system is primarily sensitive to local contrast changes, i.e. edges. From this insight, and keeping in mind that both image pyramids and the wavelet transform result in a multiresolution edge representation, it is straightforward to build the fused image as a fused multiscale edge representation. The fusion process can be summarized as follows: in the first step, the input images are decomposed into their multiscale edge representation, using either an image pyramid or a wavelet transform; the actual fusion then takes place in the difference (resp. wavelet) domain, where the fused multiscale representation is built by a pixel-by-pixel selection of the coefficients with maximum magnitude; finally, the fused image is computed by applying the appropriate reconstruction scheme.

1.1.5 Wavelet Transform

A signal analysis method similar to image pyramids is the discrete wavelet transform. The main difference is that, while image pyramids lead to an overcomplete set of transform coefficients, the wavelet transform results in a nonredundant image representation. The discrete 2-D wavelet transform is computed by the recursive application of low-pass and high-pass filters in each direction of the input image (i.e. rows and columns), followed by subsampling. Details on this scheme can be found in the references. One major drawback of the wavelet transform when applied to image fusion is its well-known shift dependency: a simple shift of the input signal may lead to completely different transform coefficients. This results in inconsistent fused images when invoked in image sequence fusion. To overcome the shift dependency of the wavelet fusion scheme, the input images must be decomposed into a shift-invariant representation. There are several ways to achieve this. The straightforward way is to compute the wavelet transform for all possible circular shifts of the input signal; in this case not all shifts are necessary, and an efficient computation scheme for the resulting wavelet representation can be developed. Another simple approach is to drop the subsampling in the decomposition process and instead modify the filters at each decomposition level, resulting in a highly redundant signal representation (see the sketch after Section 1.1.6). The actual fusion process can be described by a generic multiresolution fusion scheme which is applicable both to image pyramids and to the wavelet approach.

1.1.6 Generic Multiresolution Fusion Scheme

Figure 2: Basic Image Fusion Process Block Diagram
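Section 1.1.5 suggests dropping the subsampling to obtain a shift-invariant representation, and the generic scheme of Section 1.1.6 selects coefficients of maximum magnitude. The sketch below combines the two using the stationary wavelet transform from PyWavelets; the wavelet, the level, the averaging of the approximation band, and the requirement that image dimensions be divisible by 2**level are assumptions of this illustration:

```python
import numpy as np
import pywt

def swt_fuse(a, b, wavelet="db2", level=2):
    """Shift-invariant wavelet fusion: undecimated (stationary) wavelet
    decomposition of both inputs, pixel-by-pixel max-magnitude selection
    of the detail coefficients, then inverse transform."""
    # swt2 requires both image dimensions to be divisible by 2**level.
    ca = pywt.swt2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.swt2(b.astype(np.float64), wavelet, level=level)
    fused = []
    for (a_approx, a_det), (b_approx, b_det) in zip(ca, cb):
        f_approx = 0.5 * (a_approx + b_approx)   # average the smooth band
        f_det = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # max magnitude
                      for x, y in zip(a_det, b_det))
        fused.append((f_approx, f_det))
    return pywt.iswt2(fused, wavelet)
```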
2. Related Work (Survey)

2.1 Low frequency coefficient fusion algorithm

The Curvelet transform is close to the wavelet transform in the low frequency region; the image component containing the main energy determines the image contour, so the visual quality of the fused image can be enhanced by correctly selecting the low frequency coefficients. Existing fusion rules mostly comprise the max pixel method, the min pixel method, averaging the pixel-by-pixel gray levels of the source images, the LREMS method, and the local region deviation method [6]. The max pixel, min pixel and averaging methods do not take the correlation of the local neighborhood into account, so the fusion result is not optimal; the local region energy method and the deviation method do take local neighborhood correlation into account, but neglect image edges and definition. To address this shortcoming, the NGMS method was proposed in this paper; it mainly characterizes image detail and the degree to which the image is in focus. An eight-neighborhood sum-of-Laplacian measure is adopted to evaluate image definition [9]; a common formulation of such a measure is sketched below.
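The defining equation of the eight-neighborhood sum-of-Laplacian measure is not reproduced in this transcription, so the following Python sketch implements one common formulation of such a focus/definition measure; the kernel and the window size are assumptions, not the paper's exact definition:

```python
import numpy as np
from scipy.ndimage import convolve

# A 3x3 kernel comparing each pixel against its eight neighbors.
LAPLACIAN_8 = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=np.float64)

def definition_measure(img, window=3):
    """Per-pixel image-definition (focus) measure: absolute response of
    the eight-neighbor Laplacian, accumulated over a small local window."""
    lap = np.abs(convolve(img.astype(np.float64), LAPLACIAN_8, mode="reflect"))
    box = np.ones((window, window), dtype=np.float64)
    return convolve(lap, box, mode="reflect")
```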

2.2 High frequency coefficient fusion algorithm

The Curvelet transform has rich directional characteristics, so it can precisely express the orientation of image edges, and the high frequency coefficient regions express the image edge detail information. The pixel absolute-maximum method, the LREMS method, the local region deviation method, the direction contrast method, etc. have been used for high frequency coefficients. The LREMS method was adopted in this paper, based on the characteristics of the Curvelet transform. Suppose the high frequency coefficient of an image is CH; the fusion rule then selects, at each point (x, y), the coefficient whose local region energy is larger, i.e. CHF(x, y) equals CHA(x, y) when ECHA(x, y) >= ECHB(x, y), and CHB(x, y) otherwise, where CHA and CHB denote the Curvelet high frequency coefficients of image A and image B, CHF(x, y) denotes the fused high frequency coefficient at point (x, y), and ECHA(x, y) and ECHB(x, y) denote the local region energies of the high frequency coefficients of image A and image B at point (x, y).
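A minimal sketch of the LREMS selection just described, with the local region energy computed as a squared-coefficient sum over a small window; the window size is an assumption, as the text does not fix it:

```python
import numpy as np
from scipy.ndimage import convolve

def lrems_select(cha, chb, window=3):
    """Local-region-energy-maximum selection (LREMS) for one pair of
    high-frequency subbands CH_A, CH_B: at each position keep the
    coefficient whose squared magnitude, summed over a local window,
    is larger."""
    box = np.ones((window, window), dtype=np.float64)
    e_a = convolve(cha.astype(np.float64) ** 2, box, mode="reflect")  # E_CHA
    e_b = convolve(chb.astype(np.float64) ** 2, box, mode="reflect")  # E_CHB
    return np.where(e_a >= e_b, cha, chb)
```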

2.3 Image fusion levels

2.3.1 Pixel-level fusion

Pixel-level fusion operates on the raw data layer under strict registration conditions, carrying out data integration and analysis before the raw data of the various sensors are otherwise pre-processed. Pixel-level image fusion is the lowest level of image fusion; it keeps as much raw data as possible, providing rich and accurate image information that the other fusion levels cannot provide, so that the image is easy to analyze and to process further, e.g. for segmentation and feature extraction. The image fusion rules researched in this paper are therefore all based on the pixel level. The pixel-level image fusion structure is shown in Figure 2. The images participating in the fusion may come from multiple image sensors of different types, or from a single image sensor; the various images a single sensor provides may come from different observation times or viewpoints, or may be images with different spectral characteristics at the same time or place. The image produced by pixel-level fusion contains much richer and more accurate information content, which is conducive to the analysis and processing of the image signal, makes observation easier for people, and is more suitable for computer detection processing; it is the most important and most fundamental multi-sensor image fusion method. The advantage of pixel-level image fusion is the minimum loss of information, but it also involves the largest amount of information to be processed, the slowest processing speed, and higher demands on equipment.

2.3.2 Feature-level fusion

Feature-level fusion is the intermediate level. It carries out feature extraction (features can be target edges, direction, speed, etc.) on the original information from the various sensors and then comprehensively analyzes and processes the feature information. In general, the extracted feature information should be a sufficient statistic of the pixel information; the multisensor data are then classified, collected and integrated according to the feature information. If the data a sensor provides are image data, the features are abstracted from the image pixel information; typical feature information includes line type, edges, texture, spectrum, areas of similar brightness, areas of similar depth of field, etc., after which multi-sensor image feature integration and classification are achieved. The advantage of feature-level fusion is that it achieves considerable compression of the information, which is conducive to real-time processing, and its fusion results provide, to the greatest extent, the feature information that decision analysis needs, because the extracted features are directly related to the decision analysis.

2.3.3 Improved IHS-based fusion

The basic idea of the IHS fusion method is to convert a color image from the RGB (Red, Green, Blue) color space into the IHS (Intensity, Hue, Saturation) color space. Once the intensity information of both images is obtained, the intensity of one image is replaced by that of the other; the IHS representation, keeping the H and S of the replaced image, is then converted back into the RGB color space. The procedure is as follows (the transform matrices are not reproduced in this transcription; a commonly used form is sketched after Section 2.3.4). Step 1: transform the color space from RGB to IHS, where I_v is the intensity of the visual image, R, G and B are its color components, and V1 and V2 are intermediate components from which the hue H and saturation S are calculated. Step 2: replace the intensity component with the intensity of the infrared image, I_i. Step 3: transform the color space from IHS back to RGB, where R', G' and B' are the color components of the fused image. Because the basic idea is to add the useful information of the far infrared image to the visual image, fused parameters are placed in the matrix instead of simply substituting the intensity I_i of the far infrared image for the intensity I_v of the visual image, and these parameters are adjusted according to the information of each region. In the resulting modified formula, α and β are the fused parameters, with 0 ≤ α, β ≤ 1.

2.3.4 Artificial neural network

An artificial neural network (ANN) is well suited to estimating the relation between input and output when that relation is unknown, especially when it is nonlinear. Generally speaking, working with an ANN is divided into two parts: training and testing. During training we must define the training data and the related parameters; during testing we define the testing data and obtain the fused parameters. An ANN has a good ability to learn from examples and to extract the statistical properties of the examples during the training procedure. Feature extraction is the important pre-processing step for the ANN. In our case we choose four features as ANN inputs: the average intensity of the visual image, Mv; the average intensity of the infrared image, Mi; the average intensity of a region in the infrared image, Mir; and the visibility, Vi. The features are introduced below. The average intensity Mv is the mean of the visual gray image f_v over its H x W pixels, where H and W are the height and width of the visual image. Generally speaking, a larger Mv may mean that the image content was shot in the daytime, and a smaller Mv that it was shot at night; but this is an initial assumption and is not accurate. The average intensity Mi is defined analogously as the mean of the infrared image f_i over its H x W pixels, with the same day/night interpretation. Considering Mv and Mi together, we can assume a daytime or night-time shot when both are large or both are small, respectively. If Mi is larger and Mv is smaller, we can suppose that the highlights of the infrared image could be useful information for us; if Mi is smaller and Mv is larger, we can suppose that the infrared image may contain no useful information to add to the visual image. The average intensity of a region, Mir, is defined in the same way as Mi, but with the mean taken over a single segmented region of the infrared image. Once the four features are obtained, we can define the training and testing data. Fig. 2 shows one of our training samples: the visual image, the infrared image and the segmented infrared image, from left to right. Only the infrared image is segmented here, and color depth is used to represent each region; there are five levels representing five regions. Table I integrates the features of each region from the segmented infrared image, where regions 1 to 5 correspond to the color levels from deep to shallow, and each region has four features.
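The RGB-to-IHS matrices referenced in Steps 1 and 3 of Section 2.3.3 did not survive this transcription. For orientation, one commonly used linear IHS model consistent with the symbols I_v, V1, V2, H and S above is the following; treat it as an assumption of this note, not necessarily the exact variant the paper used:

    I_v = (R + G + B) / 3
    V1  = (sqrt(2) / 6) * (-R - G + 2B)
    V2  = (R - G) / sqrt(2)
    H   = arctan(V2 / V1)
    S   = sqrt(V1^2 + V2^2)

Under this model, a plausible reading of the modified replacement step is I' = α·I_v + β·I_i with 0 ≤ α, β ≤ 1, after which Step 3 applies the inverse of the linear map above to (I', V1, V2) to obtain (R', G', B').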
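A minimal Python sketch of the feature extraction of Section 2.3.4; the visibility feature Vi is omitted because its defining formula is not reproduced in the source, and the boolean region mask is an assumed stand-in for the five-level segmentation described above:

```python
import numpy as np

def mean_intensity(img):
    """Global average intensity (the Mv / Mi features of Section 2.3.4)."""
    return float(np.mean(img))

def region_mean_intensity(img, region_mask):
    """Average intensity inside one segmented region of the infrared
    image (the Mir feature); region_mask is a boolean array."""
    return float(np.mean(img[region_mask]))

def ann_features(visual_gray, infrared, region_mask):
    """Assemble the per-region ANN input vector [Mv, Mi, Mir]; the fourth
    feature, visibility Vi, is omitted (its formula is not in the source)."""
    return np.array([mean_intensity(visual_gray),
                     mean_intensity(infrared),
                     region_mean_intensity(infrared, region_mask)])
```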

3. Conclusion

This dissertation describes an application of the genetic algorithm to the image fusion problem. We improve the traditional IHS method, wavelet method, NN method and pattern matching method, and introduce a region-based concept into image fusion, the aim being that different regions can use different parameters under different conditions of time or weather. Since the relation between the environment and the fused parameters is nonlinear, we adopt an artificial neural network to solve this problem. In addition, the fused parameters are estimated automatically, allowing us to obtain an adaptive appearance in different states. The proposed architecture is not only useful for many applications but also adaptable to many kinds of fields. We have implemented this entire concept in MATLAB.
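The conclusion names the genetic algorithm as the meta-heuristic used to tune the fusion, but no pseudocode survives in this transcription. The following is a hedged Python sketch of how a simple GA could search the fused parameters (α, β) of Section 2.3.3, here maximizing fused-image entropy as a stand-in fitness; the actual objective, operators and encoding used by the paper are unknown:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(alpha, beta, iv, ii):
    """Entropy of the fused intensity I' = alpha*Iv + beta*Ii
    (the objective is an assumption; the paper does not fix one here)."""
    fused = np.clip(alpha * iv + beta * ii, 0, 255).astype(np.uint8)
    hist, _ = np.histogram(fused, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def ga_optimize(iv, ii, pop=20, gens=30, mut=0.1):
    """Minimal generational GA over (alpha, beta), both kept in [0, 1]."""
    P = rng.random((pop, 2))                              # initial population
    for _ in range(gens):
        f = np.array([fitness(a, b, iv, ii) for a, b in P])
        parents = P[np.argsort(f)[::-1][: pop // 2]]      # truncation selection
        i = rng.integers(0, len(parents), (pop - len(parents), 2))
        children = 0.5 * (parents[i[:, 0]] + parents[i[:, 1]])  # crossover
        children += mut * rng.normal(size=children.shape)       # mutation
        P = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    return P[np.argmax([fitness(a, b, iv, ii) for a, b in P])]  # best (a, b)
```

Truncation selection, arithmetic crossover and Gaussian mutation are generic choices; a region-based variant would simply run the same search once per segmented region.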
4. References

[1] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391-1402, June 2005.
[2] J. G. Liu, "Smoothing filter-based intensity modulation: a spectral preserve image fusion technique for improving spatial details," Int. J. Remote Sensing, vol. 21, no. 18, pp. 3461-3472, 2000.
[3] M. Li, W. Cai, and Z. Tan, "A region-based multi-sensor image fusion scheme using pulse-coupled neural network," Pattern Recognition Letters, vol. 27, pp. 1948-1956, 2006.
[4] L. J. Guo and J. M. Moore, "Pixel block intensity modulation: adding spatial detail to TM band 6 thermal imagery," Int. J. Remote Sensing, vol. 19, no. 13, pp. 2477-2491, 1998.
[5] P. S. Chavez and J. A. Bowell, "Comparison of the spectral information content of Landsat Thematic Mapper and SPOT for three different sites in the Phoenix, Arizona region," Photogramm. Eng. Remote Sensing, vol. 54, no. 12, pp. 1699-1708, 1988.
[6] A. R. Gillespie, A. B. Kahle, and R. E. Walker, "Color enhancement of highly correlated images — II. Channel ratio and 'chromaticity' transformation techniques," Remote Sensing of Environment, vol. 22, pp. 343-365, 1987.
[7] J. Sun, J. Li, and J. Li, "Multi-source remote sensing image fusion," Int. J. Remote Sensing, vol. 2, no. 1, pp. 323-328, Feb. 1998.
[8] W. J. Carper, T. M. Lillesand, and R. W. Kiefer, "The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data," Photogramm. Eng. Remote Sensing, vol. 56, no. 4, pp. 459-467, 1990.
[9] K. Edwards and P. A. Davis, "The use of intensity-hue-saturation transformation for producing color shaded-relief images," Photogramm. Eng. Remote Sensing, vol. 60, no. 11, pp. 1369-1374, 1994.
[10] E. M. Schetselaar, "Fusion by the IHS transform: should we use cylindrical or spherical coordinates?," Int. J. Remote Sensing, vol. 19, no. 4, pp. 759-765, 1998.
[11] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," Int. J. Remote Sensing, vol. 19, no. 4, pp. 743-757, 1998.
[12] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, pp. 985-997, 2002.
[13] Q. Yuan, C. Y. Dong, and Q. Wang, "An adaptive fusion algorithm based on ANFIS for radar/infrared system," Expert Systems with Applications, vol. 36, pp. 111-120, 2009.