An IHS based Multiscale Pansharpening Technique using Wavelet based Image Fusion and Saliency Detection


Shruti
Faculty, PEC University of Technology, Chandigarh

Abstract

With the advent of remote sensing, the requirement for high resolution images that provide high quality spectral and spatial information has increased. Due to cost and circuitry constraints, satellite sensors provide either multispectral (MS) images, with high spectral and low spatial resolution, or panchromatic (pan) images, with high spatial and low spectral resolution. Pansharpening increases the spatial resolution of an MS image by fusing it with a high spatial resolution pan image. Several pansharpening techniques have been discussed in the literature for obtaining pansharpened images with high spatial as well as spectral resolution. Among them, Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA) and Wavelet Transform based pansharpening are the most commonly used: IHS and PCA provide high spatial resolution, while the Wavelet Transform provides high spectral resolution. In this paper, a novel pansharpening technique is proposed that integrates the advantages of IHS and Wavelet based pansharpening to produce a pansharpened image with high spatial resolution while preserving the spectral resolution of the MS image. The proposed technique is an IHS and Wavelet based multiscale image decomposition fusion technique that uses visual saliency detection to extract detail and edge information from the MS and pan images. The technique has been implemented on several image datasets containing low and very low resolution MS images, and the results have been analyzed and compared with existing techniques. The simulation results and analysis show that the proposed pansharpening technique outperforms existing techniques by providing superior pansharpened MS images.

Introduction

Image fusion is a technique of combining the information of two or more input images into a single image. It is used to obtain an enhanced perception of a scene from a single image rather than multiple images. The need for image fusion arises when the images do not independently contain sufficient information, due to distortions or degradations. An image may be optically or statistically degraded for reasons such as: (a) the image sensor is a low resolution sensor, (b) the sensor cannot capture color images, (c) the sensor or the sensed object is in motion, (d) the sensor captures only one kind of information, (e) the image has undergone contrast or sharpness reduction, or (f) the image contains halo artifacts, blocking artifacts, artificial edges, ringing effects and so on [1]. This motivates image fusion for different image modalities, such as multisensor, multimodal, multispectral, multifocus and multitemporal images. Image fusion is applicable to a number of fields where decision making is based on the analysis of images of a scene, such as medicine, remote sensing, satellite imaging, military and civilian surveillance, object detection and recognition, land use, and navigation guidance [2]. This paper deals with remote sensing applications of image fusion.
Due to limitations in available circuitry, remote sensing sensors cannot capture images that concurrently have high spatial and high spectral resolution: they capture either multispectral (MS) images, with high spectral and low spatial resolution, or panchromatic (pan) images, with high spatial and low spectral resolution.

MS images are therefore fused with pan images to increase their spatial resolution. The process of enhancing the spatial resolution of an MS image using a pan image is called pansharpening. The objective of pansharpening is to increase the spatial resolution of the MS image while preserving its spectral information; the resulting pansharpened image should thus have spatial resolution comparable to that of the pan image and spectral resolution comparable to that of the MS image [3].

Pansharpening techniques can be broadly classified into two categories: Component Substitution (CS) techniques and Multi-resolution Analysis (MRA) techniques [4]. CS methods separate the spatial and spectral details of the MS image and then replace its spatial details with the pan image. CS methods include Intensity-Hue-Saturation (IHS) based techniques, Principal Component Analysis (PCA), the Gram-Schmidt method, the Band Dependent Spatial Detail method and the Brovey method [5][6]. MRA methods are based on multiresolution decomposition of the MS and pan images using the Wavelet Transform, Laplacian Pyramid, Curvelet Transform, etc., followed by fusion of the corresponding levels using fusion rules such as mean, choose-max or choose-min; the fused levels are then reconstructed using inverse transforms to obtain the fused image. MRA methods include the Additive Wavelet Transform method, Generalized Laplacian Pyramid (GLP), Additive Wavelet Luminance Proportion (AWLP) method, Curvelet Transform based method and High Pass Filtering method [7][8]. There are also methods, such as the variational P+XS method and methods based on statistical estimation, that do not fall into either category [9][10]. Different techniques can be used for pansharpening depending on the requirements and the application of the pansharpened images.

In this paper, a novel multiscale pansharpening technique based on CS and MRA methods and using visual saliency detection is proposed. The CS technique used is IHS based pansharpening, as it provides very high spatial resolution in the pansharpened image, but it also introduces some spectral distortion. Therefore, Wavelet based fusion, an MRA pansharpening technique that provides excellent spectral resolution, has also been used. Saliency detection is used to extract the significant details and edge information from the MS and pan images.

The rest of this paper is organized as follows: existing image fusion techniques are discussed in section II, the proposed pansharpening technique is presented in section III, section IV contains the simulation results, analysis and comparison with existing techniques, and conclusions are drawn in section V.

Previous work

IHS pansharpening technique

Computer systems generally use the RGB (Red-Green-Blue) color space to display color information. IHS (Intensity-Hue-Saturation) is another efficient color space for representing color images, and it separates spectral from spatial information: the Intensity component carries the spatial details of an image, while the Hue and Saturation components carry the spectral information.

Fig. 1. Intensity-Hue-Saturation based image pansharpening Technique

In pansharpening of MS images, the color information resides in the MS image, specifically in its Hue and Saturation components, whereas the spatial information is mostly present in the pan image; the MS image contains relatively little spatial information. The final pansharpened image should therefore carry the color information of the Hue and Saturation components of the MS image, with spatial information comparable to that of the pan image. Thus, IHS based pansharpening applies the principle of Component Substitution [6]: the MS image is first converted to the IHS color space, the histogram of its Intensity component is matched with that of the pan image, the Intensity component is replaced by the pan image while the Hue and Saturation components are kept as they are, and the result is converted back to an RGB image using the inverse transformation [11]. The IHS based pansharpening algorithm is shown in Figure 1. IHS based variants such as GIHS, FIHS, adaptive IHS, FSRF and variational IHS have also been used in the field of remote sensing [6], [10]-[15].
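To make the substitution concrete, the sketch below implements IHS-style pansharpening in NumPy. It is a minimal illustration, not the implementation used in this paper: because the intensity defined later in equation (1) is the band mean, substituting the intensity by the pan image is algebraically equivalent to adding (pan − I) to every band (the well-known additive "fast IHS" shortcut), and a simple mean/std adjustment stands in for the histogram matching step.

```python
import numpy as np

def fast_ihs_pansharpen(ms, pan):
    """IHS-style pansharpening via the additive (fast IHS) identity.

    ms:  float array (H, W, 3), co-registered and upsampled to pan size
    pan: float array (H, W)
    """
    intensity = ms.mean(axis=2)                # I = (R + G + B) / 3
    # Crude radiometric alignment of pan to I (a stand-in for the
    # histogram matching step described above).
    pan = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() \
          + intensity.mean()
    # Replacing I by pan and inverting IHS reduces to adding the
    # difference to every band when I is the band mean.
    return ms + (pan - intensity)[..., None]
```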

Wavelet Transform based pansharpening technique

The Wavelet Transform is a multiresolution transform that decomposes an image into low and high frequency components. The low frequency components, called approximation coefficients, contain the background and color information of the image; the high frequency components, called detail coefficients, contain small scale variations and spatial information [16]. Different wavelets, such as Haar, Daubechies, Coiflet and Dual Tree Complex Wavelets, can be applied at different levels. A level-1 Wavelet Transform decomposes an image into four subbands, LL1, LH1, HL1 and HH1, where LL1 is the approximation coefficient and LH1, HL1 and HH1 are the detail coefficients. If the decomposition level is increased, LL1 is further decomposed into LL2, LH2, HL2 and HH2, and so on. This multilevel decomposition for a level-2 Wavelet Transform is shown in Figure 2. The wavelet transform of a signal f(x) is given by [17]:

F(a, b) = ∫_{−∞}^{+∞} f(x) Ψ(a,b)(x) dx

where Ψ(a,b)(x) is the wavelet scaled by a and translated by b.

Fig. 2. Level-2 Wavelet decomposition

In Wavelet Transform based image fusion, the images are first decomposed into low and high frequency subbands using the Wavelet Transform. The corresponding subbands are then fused individually using fusion rules such as choose-min, choose-max, mean, choose-image1 or choose-image2, and the fused subbands are combined to reconstruct the final fused image using the inverse Wavelet Transform [18][19]. The Wavelet Transform based pansharpening algorithm is shown in Figure 3.

Fig. 3. Wavelet Transform based image pansharpening Technique

Commonly used Wavelet based fusion techniques include AW, AWPC, AWI, AWL, WiSpeR, Wavelet based Bayesian fusion and the Wavelet based variational method [15], [16], [18]-[23].
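A minimal PyWavelets sketch of this decompose-fuse-reconstruct loop is shown below. The wavelet, decomposition level and fusion rules used here (mean for the approximation band, absolute-maximum for the detail bands) are illustrative choices among those listed above, not ones fixed by the paper.

```python
import numpy as np
import pywt

def dwt_fusion(img1, img2, wavelet="haar", level=2):
    """Fuse two grayscale images of equal size in the wavelet domain."""
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)
    # Approximation (LL) band: mean rule.
    fused = [(c1[0] + c2[0]) / 2.0]
    # Detail (LH, HL, HH) bands at each level: choose-max by magnitude.
    for s1, s2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(s1, s2)))
    out = pywt.waverec2(fused, wavelet)
    return out[:img1.shape[0], :img1.shape[1]]  # crop padding for odd sizes
```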

Saliency detection based pansharpening technique

Saliency detection identifies the significant information in an image, i.e. the regions that appear more prominent and useful than their neighbors; this generally consists of fine details and edge information interpretable by the human eye [24]. In saliency detection based image fusion, saliency maps representing the significant information are created for both input images [25], and these maps are then used for effective fusion of the images. The existing saliency detection based image fusion method is shown in Figure 4, and its algorithm is explained below.

Fig. 4. Saliency Detection based image pansharpening Technique

1. The original images are decomposed into two layers, a base layer and a detail layer. The base layer consists of large scale variations and the detail layer consists of small scale information.
2. The visual saliency of each input image is detected individually by creating a saliency map that represents the salient information of that image.
3. The images to be fused generally contain complementary information, or one image may carry more significant information at a particular pixel location than the other. To fuse the information of all the images effectively, it must therefore be determined which image provides the information at each point of the fused image. This is done by creating weight maps from the saliency maps: a higher weight is assigned to the pixel carrying more significant information, and a correspondingly lower weight is assigned to the pixel at the same location in the other image.
4. The detail layers are fused by multiplying each detail layer with its weight map and adding the products, yielding a fused detail layer that contains the information of all the detail layers.
5. The base layers are fused using the mean rule.
6. The fused base layer and fused detail layer are combined to reconstruct the final fused image.

PROPOSED PANSHARPENING TECHNIQUE

Pansharpening of MS images is required in remote sensing because of limitations in the capabilities of satellite sensors. The images captured by satellite sensors have either high spatial or high spectral resolution, but for effective interpretation the images must have both. MS images have high spectral resolution, and their spatial resolution is increased by pansharpening, producing a pansharpened MS image with high spatial as well as spectral resolution. Many CS and MRA based pansharpening techniques discussed in the literature provide good pansharpened images, but each has its own limitations: CS methods like IHS based pansharpening provide high spatial resolution but introduce some spectral distortion, whereas MRA methods like Wavelet based pansharpening preserve the spectral information of the MS image but do not provide high spatial resolution.

In this paper, a novel multiscale pansharpening approach based on both the IHS and the Wavelet pansharpening algorithms and using visual saliency detection is presented. The IHS and Wavelet based techniques are integrated to obtain good spatial resolution while preserving the spectral resolution of the MS image, since IHS based pansharpening provides high spatial resolution and Wavelet based pansharpening provides high spectral resolution [6], [19]. Saliency detection is used because it helps to identify and extract the visually significant information of an image [25].

Fig. 5. Multiscale decomposition based image fusion Technique

The proposed pansharpening algorithm is a multiscale decomposition fusion algorithm. Multiscale decomposition image fusion consists of three broad steps: image analysis, image fusion and image synthesis [26]. First, the image components to be fused are decomposed into multiple levels, classified into base layers (approximation coefficients) and detail layers (detail coefficients); the base layer coefficients carry background information and other large scale variations, and the detail layer coefficients carry edge information and other small scale variations. This is image analysis. The corresponding layers of the input images are then fused individually using different fusion methods, and the fused layers are finally combined to reconstruct the fused image; this is image synthesis. Multiscale decomposition image fusion is illustrated in Figure 5.

The proposed IHS and Wavelet based multiscale decomposition pansharpening algorithm using saliency detection has been implemented with two-scale image decomposition. The detailed algorithm is shown in Figure 6. The MS image is first converted from RGB to IHS color space; the Intensity component of the MS image is then fused with the pan image using multiscale decomposition image fusion, while the Hue and Saturation components are kept unchanged. The fusion of the corresponding layers of the Intensity component and the pan image is carried out using saliency detection and Wavelet Transform based image fusion. The fused component is then combined with the Hue and Saturation components of the MS image, and the final pansharpened MS image is obtained by IHS to RGB conversion.

Fig. 6. Proposed Pansharpening Technique

The proposed pansharpening algorithm is implemented using the following steps.

1. RGB to IHS conversion of MS image

The MS image is converted from RGB to IHS color space; the pan image, being grayscale, is kept as it is. The transformation from RGB to IHS is carried out using equations (1)-(4):

I = (R + G + B) / 3    (1)

H = θ for B ≤ G, and H = 2π − θ for B > G    (2)

where

θ = cos⁻¹{ 0.5[(R − G) + (R − B)] / sqrt[(R − G)² + (R − B)(G − B)] }    (3)

S = 1 − 3 min(R, G, B) / (R + G + B)    (4)
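Equations (1)-(4) translate directly into NumPy; the sketch below is illustrative, with an eps guard and clipping added here to keep arccos and the divisions well defined. The inverse conversion of step 10 uses the standard sector formulas of the IHS model and is not reproduced.

```python
import numpy as np

def rgb_to_ihs(rgb, eps=1e-12):
    """Forward RGB-to-IHS conversion per equations (1)-(4).

    rgb: float array (H, W, 3) scaled to [0, 1].
    """
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0                                  # eq. (1)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))       # eq. (3)
    H = np.where(B <= G, theta, 2.0 * np.pi - theta)       # eq. (2)
    S = 1.0 - 3.0 * rgb.min(axis=-1) / (R + G + B + eps)   # eq. (4)
    return I, H, S
```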

2. Two-scale image decomposition

The pan image and the Intensity component of the MS image are used to obtain the spatial information, while the spectral information remains preserved in the Hue and Saturation components of the MS image. The Intensity component I1(x, y) and the pan image I2(x, y) are decomposed using a two-scale decomposition based on mean filtering. Each image is decomposed into two layers, a base layer and a detail layer: the base layer consists of large scale variations or background information, and the detail layer consists of small scale information or fine details. The base layer is obtained by mean filtering the original image, since the mean filter removes small scale variations by smoothing and leaves only the large scale variations; the detail layer is obtained by subtracting the base layer from the original image. The base layers B1(x, y) and B2(x, y) of I1(x, y) and I2(x, y) are given by equations (5) and (6):

B1(x, y) = I1(x, y) ∗ Ω(x, y)    (5)
B2(x, y) = I2(x, y) ∗ Ω(x, y)    (6)

where Ω(x, y) is a mean filter of window size λΩ. The detail layers D1(x, y) and D2(x, y) of I1(x, y) and I2(x, y) are given by (7) and (8):

D1(x, y) = I1(x, y) − B1(x, y)    (7)
D2(x, y) = I2(x, y) − B2(x, y)    (8)

The Hue and Saturation components of the MS image are kept unchanged.

3. Saliency detection

Saliency detection refers to the extraction of significant information of an image, such as objects, lines, persons or edges. The saliency maps of the Intensity component of the MS image and of the pan image are constructed using mean and median filters. The mean filter preserves background information but blurs details and edges by smoothing, whereas the median filter preserves important background as well as edge information while removing noise and artifacts. The difference between the mean- and median-filtered images therefore yields the edges and other visually salient information. If Ω(x, y) and ζ(x, y) are mean and median filters of window sizes λΩ and λζ respectively, the filter outputs are given by (9)-(12):

I1^Ω(x, y) = I1(x, y) ∗ Ω(x, y)    (9)
I2^Ω(x, y) = I2(x, y) ∗ Ω(x, y)    (10)
I1^ζ(x, y) = I1(x, y) ∗ ζ(x, y)    (11)
I2^ζ(x, y) = I2(x, y) ∗ ζ(x, y)    (12)

where I1^Ω(x, y) and I2^Ω(x, y) are the mean filter outputs and I1^ζ(x, y) and I2^ζ(x, y) are the median filter outputs for I1(x, y) and I2(x, y) respectively. The saliency maps are given by (13) and (14):

S1(x, y) = |I1^Ω(x, y) − I1^ζ(x, y)|    (13)
S2(x, y) = |I2^Ω(x, y) − I2^ζ(x, y)|    (14)

where S1(x, y) and S2(x, y) are the saliency maps of I1(x, y) and I2(x, y) respectively and |·| denotes the absolute value.

4. Construction of weight maps

To fuse the significant information of the input images effectively into a single image, it must be determined which image is to provide the information at each pixel location of the fused image. This is done by creating weight maps W1(x, y) and W2(x, y) of the images I1(x, y) and I2(x, y) on the basis of the saliency maps [25].
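Steps 2 and 3 amount to a handful of filtering calls; a sketch with SciPy is given below, where uniform_filter plays the mean filter Ω and median_filter plays ζ. The default sizes (λΩ = 100, λζ = 3) follow the values selected later in the experiments.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def two_scale_decompose(img, size_mean=100):
    """Eqs. (5)-(8): mean-filtered base layer plus residual detail layer."""
    base = uniform_filter(img, size=size_mean)
    return base, img - base

def saliency_map(img, size_mean=100, size_median=3):
    """Eqs. (9)-(14): |mean-filtered - median-filtered| keeps edges and
    fine structure while flat background cancels out."""
    return np.abs(uniform_filter(img, size=size_mean)
                  - median_filter(img, size=size_median))
```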

The weight maps are constructed from the saliency maps S1(x, y) and S2(x, y) as given by (15) and (16):

W1(x, y) = S1(x, y) / [S1(x, y) + S2(x, y)]    (15)
W2(x, y) = S2(x, y) / [S1(x, y) + S2(x, y)]    (16)

It is clear from these formulae that the weights lie between 0 and 1 and that the corresponding weights of D1(x, y) and D2(x, y) at the same pixel location sum to 1. Thus, if a higher weight is assigned to a pixel carrying more significant information, a correspondingly lower weight is assigned to the pixel at the same location in the other image. Since the pan image contains a comparatively larger amount of spatial detail, the weights assigned to the pan image are generally higher than those assigned to the Intensity component of the MS image.

5. Assignment of weights

The weight maps are multiplied with their respective detail layers. The weights are applied to the detail layers because the detail layers contain most of the visible information, such as edges, lines, people and buildings, while the base layers contain only background information; thus, to efficiently obtain the spatial information from I1(x, y) and I2(x, y), the detail layers are considered. The weighted detail layers WD1(x, y) and WD2(x, y) are obtained using equations (17) and (18):

WD1(x, y) = D1(x, y) × W1(x, y)    (17)
WD2(x, y) = D2(x, y) × W2(x, y)    (18)

6. Detail layer fusion

The weighted detail layers are fused using Wavelet based image fusion. WD1(x, y) and WD2(x, y) are decomposed into approximation and detail coefficients using the Discrete Wavelet Transform (DWT). In this paper, a Daubechies wavelet at level 2 has been used. The corresponding approximation coefficients are fused using the max rule and the detail coefficients using the mean rule. The fused coefficients are combined and the fused weighted detail layer is reconstructed using the Inverse Discrete Wavelet Transform (IDWT). The resulting layer, FD(x, y), is the final fused detail layer of I1(x, y) and I2(x, y).

7. Base layer fusion

The base layers B1(x, y) and B2(x, y) are fused using Wavelet based image fusion in the same way: they are decomposed into approximation and detail coefficients using a Daubechies wavelet at level 2, the corresponding approximation coefficients are fused using the max rule, the detail coefficients are fused using the mean rule, and the fused base layer is reconstructed using the IDWT. The resulting layer, FB(x, y), is the final fused base layer of I1(x, y) and I2(x, y).

8. Two-scale image reconstruction

The fused base layer FB(x, y) and the fused detail layer FD(x, y) are combined by two-scale image reconstruction as shown in equation (19):

F(x, y) = FB(x, y) + FD(x, y)    (19)

where F(x, y) is the fused image component. F(x, y) contains all the spatial information of the MS and pan images, as it consists of both the detail and the background information of I1(x, y) and I2(x, y).

9. Component substitution

The Intensity component of the MS image is now replaced by the fused component F(x, y), while the Hue and Saturation components are kept as they are, because the pan image carries no color information and the Hue and Saturation components of the MS image represent the entire color information.

10. IHS to RGB conversion

The pansharpened image in the IHS plane is converted back to the RGB plane using the inverse transformations, giving the final fused image: the pansharpened MS image.
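Putting steps 4-8 together for the intensity component and the pan image gives the short pipeline below. It reuses the filtering ideas of the previous sketch and is a hedged reconstruction: the paper specifies a Daubechies wavelet at decomposition level 2 but not its order, so "db2" is an assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, median_filter

def wavelet_fuse(a, b, wavelet="db2", level=2):
    """Steps 6-7: max rule on approximation coefficients, mean rule on
    detail coefficients ('db2' order is an assumption, see text)."""
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [np.maximum(ca[0], cb[0])]
    for sa, sb in zip(ca[1:], cb[1:]):
        fused.append(tuple((x + y) / 2.0 for x, y in zip(sa, sb)))
    return pywt.waverec2(fused, wavelet)[:a.shape[0], :a.shape[1]]

def fuse_intensity_with_pan(i_ms, pan, size_mean=100, size_median=3):
    """Steps 2-8 for the I component of the MS image and the pan image."""
    b1 = uniform_filter(i_ms, size=size_mean); d1 = i_ms - b1  # eqs. (5), (7)
    b2 = uniform_filter(pan, size=size_mean);  d2 = pan - b2   # eqs. (6), (8)
    s1 = np.abs(b1 - median_filter(i_ms, size=size_median))   # eqs. (9)-(13)
    s2 = np.abs(b2 - median_filter(pan, size=size_median))    # eqs. (10)-(14)
    w1 = s1 / (s1 + s2 + 1e-12)                                # eqs. (15)-(16)
    fd = wavelet_fuse(d1 * w1, d2 * (1.0 - w1))                # eqs. (17)-(18), step 6
    fb = wavelet_fuse(b1, b2)                                  # step 7
    return fb + fd                                             # eq. (19)
```

The returned F(x, y) then replaces the Intensity component (step 9) before the inverse IHS conversion of step 10.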

RESULTS AND DISCUSSIONS

EXPERIMENTAL SETUP

Image Datasets

The proposed pansharpening algorithm has been implemented on several datasets consisting of MS and pan images, covering low as well as very low spatial resolution MS images. Results are shown here for two datasets: Dataset 1 consists of a low resolution MS image and a high resolution pan image taken from the AVIRIS dataset [27], whereas Dataset 2 consists of a very low resolution MS image and a high resolution pan image taken from the HYDICE image dataset [28]. The experiments were executed in MATLAB R2013a on a Windows personal computer with an Intel(R) 2.16 GHz CPU and 4 GB RAM.

Image Analysis

Both subjective and objective analysis of the results has been carried out. The subjective analysis is a qualitative assessment of the images by visual inspection. The objective analysis is quantitative, using several image quality metrics that characterize the quality of the fused image [29]. Image analysis is generally done against a reference image; since no reference image is available in pansharpening of MS images, the MS image is taken as the reference.

Evaluation Parameters

The following quality metrics have been used to analyze and compare the quality of the proposed pansharpening technique.

1) Correlation coefficient: The correlation coefficient (CC) measures the amount of linear dependence between the reference image and the fused image [5]. The spectral correlation coefficient measures the correlation between the MS and fused images, and the spatial correlation coefficient measures the correlation between the pan and fused images. Its value lies in [-1, 1]; for a high quality fused image, the correlation coefficient should be close to 1.

CC = Σᵢ Σⱼ (X(i, j) − X̄)(Y(i, j) − Ȳ) / sqrt[ Σᵢ Σⱼ (X(i, j) − X̄)² · Σᵢ Σⱼ (Y(i, j) − Ȳ)² ]

where X and Y are the reference image and the fused image respectively, and the sums run over i = 1..M, j = 1..N.

2) Entropy: The entropy (E) of an image measures the amount of information it carries [8]; a larger entropy indicates better image quality.

E = − Σᵢ Σⱼ p(i, j) log₂ p(i, j)

where p(i, j) is the probability of occurrence of the pixel value Y(i, j) in the output image.

3) Universal Image Quality Index: The universal image quality index (UIQI) represents the quality of an image as the product of correlation, luminance and contrast terms [30]. Its value lies in [-1, 1]; for a high quality fused image, UIQI should be close to 1.

UIQI = [σxy / (σx σy)] · [2 x̄ ȳ / (x̄² + ȳ²)] · [2 σx σy / (σx² + σy²)]

4) Spectral Discrepancy: The spectral discrepancy (SPD) measures the spectral quality of the fused image; a small value indicates better spectral quality [19].

SPD = (1 / MN) Σᵢ Σⱼ |Y(i, j) − X(i, j)|

5) Spectral Angle Mapper: The spectral angle mapper (SAM) measures the spectral angle between the fused image and the reference MS image. It is expressed in degrees, and a lower value indicates better spectral quality [31].

SAM(X, Y) = cos⁻¹( ⟨X, Y⟩ / (‖X‖ ‖Y‖) )
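For reference, these metrics can be computed as follows. This is a straightforward NumPy transcription; 256 histogram bins for the entropy and a per-pixel spectral angle averaged over the image are assumptions, since the paper does not fix those details.

```python
import numpy as np

def correlation_coefficient(x, y):
    """CC between reference x and fused y (both 2-D arrays)."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

def entropy(img, bins=256):
    """E = -sum p log2 p over the image histogram."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def spectral_discrepancy(x, y):
    """SPD: mean absolute difference between reference and fused image."""
    return np.abs(y - x).mean()

def sam_degrees(x, y, eps=1e-12):
    """Mean per-pixel spectral angle; x, y are (H, W, bands) arrays."""
    num = (x * y).sum(axis=-1)
    den = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
```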

Variation of Window size

The proposed pansharpening technique has been tested with different window sizes of the mean and median filters. The window size of the mean filter, λΩ, was varied from 35 to 120 in steps of 5, and different results are observed as λΩ changes. The results for representative values of λΩ on Dataset 1 are shown in Figure 7.

Fig. 7. Results of the proposed technique using different window sizes of the mean filter: (a) MS image, (b) Pan image, (c) λΩ = 35, (d) λΩ = 65, (e) λΩ = 100, (f) λΩ = 120.

The visual analysis shows that the results improve as the filter size increases. This is because, when λΩ is increased, the base layer contains fewer details and more background information while the detail layer contains more details; the technique used for fusing the base layer fuses background information effectively, and the one used for the detail layer fuses edge information effectively. Thus, the overall fusion quality is enhanced by increasing the window size of the mean filter. The variations are noticeable up to a filter size of 100; the results for λΩ = 100 and λΩ = 120 appear almost the same.

The quantitative analysis of these results uses three quality parameters, namely spectral correlation, spatial correlation and UIQI, and the computation time has also been considered. The values of the parameters and the computation time for Dataset 1 are shown in Table I.

Table I. Performance analysis of the proposed technique using different window sizes of the mean filter

Window size of mean filter (λΩ)   Spectral CC   Spatial CC   UIQI     Computation time (s)
35                                0.7696        0.9342       0.2597   2.6290
65                                0.8690        0.9176       0.5378   2.6555
100                               0.9490        0.8972       0.7436   2.7585
120                               0.9515        0.8894       0.7478   2.8330

Table I shows that Spectral CC and UIQI increase and Spatial CC decreases slightly as λΩ increases, and the computation time also grows with λΩ. The increase in Spectral CC and UIQI becomes sluggish for λΩ > 100. Thus, λΩ = 100 is the optimum window size of the mean filter, providing a good trade-off between image quality and computation time.
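The sweep behind Table I can be reproduced in a few lines, assuming fuse_intensity_with_pan from the earlier sketch and the quality metrics above, with i_ms and pan being co-registered float arrays (hypothetical variable names).

```python
# Hypothetical reproduction of the Table I sweep over mean-filter sizes.
for size_mean in (35, 65, 100, 120):
    fused_i = fuse_intensity_with_pan(i_ms, pan, size_mean=size_mean)
    print(size_mean,
          correlation_coefficient(i_ms, fused_i),   # spectral CC proxy
          correlation_coefficient(pan, fused_i))    # spatial CC proxy
```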

When the window size of the median filter, λζ, is varied from 3 to 6 in steps of 1, no noticeable change in the results is observed, and increasing λζ beyond 6 blurs the edges. So the value λζ = 3 has been used.

Comparison with Other Methods

The results of the proposed pansharpening technique are analyzed and compared with the results of existing pansharpening techniques: IHS based pansharpening, Wavelet based image fusion and saliency detection based image fusion. The saliency detection based technique is a recently proposed method that outperforms many other fusion techniques, including pyramid based techniques (FSD and Grad), multiresolution transform based techniques (MSVD) and multiscale decomposition based techniques (CBF and ADF) [25]. Comparing the proposed technique with the saliency detection based image fusion technique therefore provides an adequate quality analysis.

Simulation results and Analysis

The proposed pansharpening algorithm has been implemented on the datasets of MS and pan images, using window sizes of 100 and 3 for the mean and median filters respectively. Results are shown for two datasets, and the proposed technique is compared with the existing techniques both qualitatively and quantitatively.

Qualitative analysis

The pansharpening results for Dataset 1 and Dataset 2 are shown in Figures 8 and 9 respectively. The simulation results for Dataset 1 in Figure 8 show that the IHS method recovers fine details, increasing the spatial resolution, but the color information is altered, which lowers the spectral quality of the image. The Wavelet based fusion results show good spectral quality, but a large amount of spatial distortion is present, as very few details are visible. The results produced by the saliency detection based method are visually poor, as it produces a very low contrast pansharpened image. The proposed method retains good detail as well as color information, enhancing both the spatial and the spectral resolution.

Fig. 8. Results of pansharpening of the MS image for image dataset 1: (a) MS image, (b) Pan image, (c) IHS, (d) Wavelet, (e) Saliency, (f) Proposed.

In Figure 9, the MS image used has very low spatial resolution, yet the proposed method produces a good quality image with visible detail information while also preserving the color information of the MS image. The IHS method shows color alteration, the Wavelet method produces results with very poor spatial information, and the results of the saliency based method are again unsatisfactory on visual inspection. It can thus be inferred that the proposed pansharpening algorithm produces results that are visually more informative than the existing techniques, for low as well as very low spatial resolution MS images.

Fig. 9. Results of pansharpening of the MS image for image dataset 2: (a) MS image, (b) Pan image, (c) IHS, (d) Wavelet, (e) Saliency, (f) Proposed.

Quantitative analysis

The results of the proposed and existing pansharpening techniques are analyzed quantitatively to obtain clearer and more accurate information about the quality of the pansharpened images, since visual analysis alone does not fully characterize image quality. This is done using the image quality metrics described above, computed for both datasets.

Table II shows the values of the image quality metrics for Dataset 1. It is evident from the table that the spectral correlation coefficient is the largest for the proposed method while its spatial correlation coefficient is also satisfactory, showing that the spatial resolution of the MS image has increased considerably while the spectral resolution is preserved. The values of Entropy and UIQI are also the largest, indicating a high quality image, and the lowest SAM indicates closeness of the pansharpened image to the reference image. IHS based pansharpening also gives good spatial correlation, but its spectral correlation is low, meaning spectral information is lost during pansharpening. The best SPD is obtained with Wavelet based image fusion, indicating good spectral quality, but its spatial correlation is very low, so it produces a fused image with very poor spatial quality and the objective of pansharpening is not met. The saliency detection based technique yields high spatial correlation, but its UIQI is very low, so the image quality is inadequate for remote sensing applications. The variations of the evaluation parameters for Dataset 1 are shown as bar graphs in Figure 10. The analysis shows that the proposed pansharpening technique gives better results than the existing techniques.

Table II. Performance analysis of pansharpening techniques for image dataset 1

Technique                           Spectral CC  Spatial CC  Entropy  UIQI    SPD     SAM
IHS based technique                 0.8531       0.9153      7.8617   0.7192  0.2131  0.2754
Wavelet based technique             0.9481       0.7180      7.8670   0.7396  0.0550  0.1616
Saliency detection based technique  0.9236       0.9319      5.3531   0.2592  0.4257  0.1590
Proposed technique                  0.9490       0.8972      7.9974   0.7436  0.1886  0.1563

Fig. 10. Evaluation parameters for the results of the existing and proposed pansharpening techniques for image dataset 1.

Table III shows the values of the image quality metrics for Dataset 2. The values of Spectral CC and UIQI are the highest, and SAM the lowest, for the proposed algorithm, and the values of Entropy and spatial correlation are also high, showing that the proposed algorithm produces a good quality image whose spatial resolution is increased by a considerable amount while the spectral resolution of the MS image is preserved. The IHS method gives high spatial correlation, but its spectral correlation is very low, meaning the spectral information has been lost. The Wavelet based method gives high spectral correlation and the lowest SPD, meaning the spectral information of the MS image is preserved, but its spatial correlation is very low. The saliency detection based technique provides an image with high spectral and spatial correlation, but its UIQI is very low.

Thus, it can be inferred that the proposed pansharpening technique performs better than the existing techniques even for very low spatial resolution MS images.

Table III. Performance analysis of pansharpening techniques for image dataset 2

Technique                           Spectral CC  Spatial CC  Entropy  UIQI    SPD     SAM
IHS based technique                 0.6555       0.9237      7.1446   0.6022  0.1998  0.3567
Wavelet based technique             0.9334       0.7253      7.6495   0.6725  0.0347  0.1610
Saliency detection based technique  0.9408       0.8634      5.8391   0.1848  0.3376  0.1581
Proposed technique                  0.9410       0.8587      7.5786   0.7352  0.2657  0.1476

CONCLUSION

Different pansharpening techniques have been discussed along with their advantages and limitations, and a new pansharpening technique based on the IHS and Wavelet Transform pansharpening techniques has been proposed. The proposed technique is an IHS based multiscale image decomposition pansharpening technique that uses visual saliency detection and Wavelet Transform based image fusion. It has been implemented on several image datasets containing both low and very low spatial resolution MS images, and the results have been analyzed qualitatively and quantitatively. The results reveal that the proposed technique performs well for low as well as very low resolution MS images. This is because the multiscale decomposition fusion method fuses the large scale and small scale information individually, preserving both kinds of information from the MS and pan images; the IHS method provides high spatial resolution, the Wavelet Transform provides high spectral resolution, and saliency detection helps extract the significant details and edge information from the MS and pan images. Thus, the proposed pansharpening technique provides high spatial resolution pansharpened images while preserving the spectral resolution of MS images. The results have also been compared with existing pansharpening techniques, and the comparison shows that the proposed technique provides better results. The proposed technique can also be used to fuse images from other modalities, such as medical images or visible and IR images.

REFERENCES

[1] H. B. Kekre, Comparison of image fusion techniques in RGB & Kekre's LUV color space, pp. 114-120, 2015.
[2] U. Patil and U. Mudengudi, Image fusion using hierarchical PCA, 2011 Int. Conf. Image Inf. Process., pp. 1-6, 2011.
[3] M. Dalla Mura, S. Prasad, F. Pacifici, P. Gamba, J. Chanussot, and J. A. Benediktsson, Challenges and opportunities of multimodality and data fusion in remote sensing, Proc. IEEE, vol. 103, no. 9, pp. 1585-1601, 2015.
[4] G. Vivone, L. Alparone, J. Chanussot, M. Dalla Mura, A. Garzelli, G. A. Licciardi, R. Restaino, and L. Wald, Pansharpening algorithms, IEEE Trans. Geosci. Remote Sens., vol. 53, no. 5, pp. 2565-2586, 2014.
[5] H. R. Shahdoosti and H. Ghassemian, Combining the spectral PCA and spatial PCA fusion methods by an optimal filter, Inf. Fusion, vol. 27, pp. 150-160, 2016.
[6] Y. Leung, J. Liu, and J. Zhang, An improved adaptive intensity-hue-saturation method for the fusion of remote sensing images, vol. 11, no. 5, pp. 985-989, 2014.
[7] N. H. Kaplan and I. Erer, Bilateral filtering-based enhanced pansharpening of multispectral satellite images, IEEE Geosci. Remote Sens. Lett., vol. 11, no. 11, pp. 1941-1945, 2014.
[8] L. Dong, Q. Yang, H. Wu, H. Xiao, and M. Xu, High quality multi-spectral and panchromatic image fusion technologies based on curvelet transform, Neurocomputing, vol. 159, pp. 268-274, 2015.
[9] B. Rasti, J. R. Sveinsson, and M. O. Ulfarsson, Total variation based hyperspectral feature extraction, pp. 4644-4647, 2014.
[10] H. Chu and W. Zhu, Fusion of IKONOS satellite imagery using IHS transform and local variation, vol. 5, no. 4, pp. 653-657, 2008.

[11] M. Choi, A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter, vol. 44, no. 6, pp. 1672-1682, 2006.
[12] M. González-Audícana, X. Otazu, O. Fors, and J. Alvarez-Mozos, A low computational-cost method to fuse IKONOS images using the spectral response function of its sensors, vol. 44, no. 6, pp. 1683-1691, 2006.
[13] A. Garzelli and F. Nencini, Fusion of panchromatic and multispectral images by genetic algorithms, pp. 3793-3796, 2006.
[14] S. Rahmani, M. Strait, D. Merkurjev, M. Moeller, and T. Wittman, An adaptive IHS pan-sharpening method, 2010.
[15] S. Chen, R. Zhang, H. Su, J. Tian, and J. Xia, SAR and multispectral image fusion using generalized IHS transform based on à trous wavelet and EMD decompositions, vol. 10, no. 3, pp. 737-745, 2010.
[16] A. M. Ragheb, H. Osman, A. M. Abbas, S. M. Elkaffas, T. A. El-Tobely, S. Khamis, M. E. Elhalawany, M. E. Nasr, M. I. Dessouky, W. Al-Nuaimy, and F. E. Abd El-Samie, Simultaneous fusion and denoising of panchromatic and multispectral satellite images, Sens. Imaging, vol. 13, no. 3-4, pp. 119-141, 2012.
[17] Y. Yusof and O. O. Khalifa, Wavelet transform, pp. 14-17, 2007.
[18] R. Singh and A. Khare, Fusion of multimodal medical images using Daubechies complex wavelet transform: a multiresolution approach, vol. 19, pp. 49-60, 2014.
[19] Q. Guo and S. Liu, Performance analysis of multi-spectral and panchromatic image fusion techniques based on two wavelet discrete approaches, Optik, vol. 122, no. 9, pp. 811-819, 2011.
[20] A. Ellmauthaler, C. L. Pagliari, and E. A. B. Silva, Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks, vol. 22, no. 3, 2013.
[21] J. Núñez, Multiresolution-based image fusion with additive wavelet decomposition, vol. 37, no. 3, pp. 1204-1211, 1999.
[22] M. Moeller, T. Wittman, and A. L. Bertozzi, Variational wavelet pan-sharpening, pp. 1-9, 2008.
[23] X. Otazu, M. González-Audícana, O. Fors, and J. Núñez, Introduction of sensor spectral response into image fusion methods: application to wavelet-based methods, vol. 43, no. 10, pp. 2376-2385, 2005.
[24] Q. Duan, T. Akram, P. Duan, and X. Wang, Visual saliency detection using information contents weighting, Optik, vol. 127, pp. 7418-7430, 2016.
[25] D. P. Bavirisetti and R. Dhuli, Two-scale image fusion of visible and infrared images using saliency detection, Infrared Phys. Technol., vol. 76, pp. 52-64, 2016.
[26] Y. Liu, S. Liu, and Z. Wang, A general framework for image fusion based on multi-scale transform and sparse representation, Inf. Fusion, vol. 24, pp. 147-164, 2015.
[27] Q. Wei and N. Dobigeon, Fast fusion of multi-band images based on solving a Sylvester equation, vol. 24, no. 11, pp. 4109-4121, 2015.
[28] F. Fang, G. Zhang, F. Li, and C. Shen, Framelet based pan-sharpening via a variational method, Neurocomputing, vol. 129, pp. 362-377, 2014.
[29] P. Jagalingam and A. V. Hegde, A review of quality metrics for fused image, Aquat. Procedia, vol. 4, pp. 133-142, 2015.
[30] Z. Wang and A. Bovik, A universal image quality index, IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81-84, 2002.
[31] X. Zhang and P. Li, Lithological mapping from hyperspectral data by improved use of spectral angle mapper, Int. J. Appl. Earth Obs. Geoinf., vol. 31, pp. 95-109, 2014.