QUALITY ASSESSMENT OF IMAGE FUSION METHODS IN TRANSFORM DOMAIN

Yoonsuk Choi*, Ershad Sharifahmadian, Shahram Latifi
Dept. of Electrical and Computer Engineering, University of Nevada, Las Vegas
4505 Maryland Parkway, Las Vegas, NV 89154-4026

DOI: 10.5121/ijit.2014.3102

ABSTRACT

In this paper, we analyze and compare the performance of fusion methods based on four different transforms: i) the wavelet transform, ii) the curvelet transform, iii) the contourlet transform and iv) the nonsubsampled contourlet transform. The fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, eight different performance metrics are adopted to comparatively analyze the fusion results. The comparison shows that the nonsubsampled contourlet transform method performs better than the other three methods, both spatially and spectrally. We also observed from additional experiments that a decomposition level of 3 offered the best fusion performance, and that decomposition levels beyond level 3 did not significantly improve the fusion results.

KEYWORDS

Curvelet, Contourlet, Decomposition, Nonsubsampled contourlet, Wavelet

1. INTRODUCTION

Image fusion techniques have been employed in various applications, such as concealed weapon detection, remote sensing, and medical imaging. Combining two or more images of the same scene usually produces an image that is more useful for the application at hand [1]. Fusing different images can reduce the uncertainty associated with any single image. Furthermore, image fusion should include techniques that can handle the geometric alignment of several images acquired by different sensors; such techniques are called multi-sensor image fusion [2]. The fused images are widely used in military and security applications such as target detection, object tracking, weapon detection and night vision.

There are many different methods for image fusion. The Brovey Transform (BT), the Intensity-Hue-Saturation (IHS) transform and Principal Component Analysis (PCA) [3] provide the basis for many commonly used image fusion techniques. The IHS method is the oldest method used in image fusion. It operates in the RGB domain: the RGB input image is transformed to the IHS domain, the intensity component is combined with the second input image, and an inverse IHS transform converts the result back to the RGB domain [4]. The Brovey transform is based on the chromaticity transform: the RGB input image is first normalized and multiplied by the other image, and the result is then added to the intensity component of the RGB input image [5]. PCA-based image fusion methods are similar to IHS methods, but without any limitation on the number of fused bands. Some of these techniques improve the spatial resolution while distorting the original chromaticity of the input images, which is a major drawback.
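To make the component-substitution idea concrete, the following is a minimal sketch of a fast additive IHS-style pan-sharpening step in Python with NumPy. The function and variable names are illustrative, and the paper does not prescribe this particular variant; it is given only as an example of the spatial-domain family discussed above.

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Fast additive IHS-style fusion: inject the difference between the
    panchromatic band and the RGB intensity into every color channel.

    rgb: float array of shape (H, W, 3); pan: float array of shape (H, W).
    """
    intensity = rgb.mean(axis=2)          # simple intensity component I = (R+G+B)/3
    detail = pan - intensity              # spatial detail missing from the MS image
    fused = rgb + detail[:, :, None]      # add the detail to each channel
    return np.clip(fused, 0.0, 1.0)       # assumes inputs scaled to [0, 1]
```

This variant is equivalent to substituting the intensity component in the forward/inverse IHS pipeline described above, and it exhibits the same spectral-distortion drawback when the PAN and intensity responses differ.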

Recently, great interest has arisen in new transform techniques that utilize multi-resolution analysis, such as the wavelet transform (WT). Multi-resolution decomposition schemes decompose the input images into different scales, or levels of frequencies. Wavelet-based image fusion techniques are implemented by replacing the detail components (high-frequency coefficients) of a colored input image with the detail components of another, gray-scale input image. However, wavelet-based fusion techniques are not optimal at capturing two-dimensional singularities in the input images. Two-dimensional wavelets, which are obtained as tensor products of one-dimensional wavelets, are good at detecting discontinuities at edge points, but they exhibit limited capabilities in detecting smoothness along contours [6]. Moreover, the singularity in some objects is due to discontinuity points located at the edges, and these points lie along smooth curves that render the smooth boundaries of objects. The discrete wavelet transform (DWT), the stationary wavelet transform (SWT) and the dual-tree complex wavelet transform (DTCWT) cannot capture the curves and edges of images well. Bases that are better suited to representing images should contain geometrical structure information. Candès and Donoho proposed the curvelet transform (CVT) with the idea of representing a curve as a superposition of bases of various lengths and widths obeying the scaling law width ≈ length² [7]-[9]. The CVT is referred to as a true 2-D transform. Unlike the CVT, which was first developed in the continuous domain and then discretized for sampled data, the contourlet transform (CT), introduced by Do and Vetterli, starts with a discrete-domain construction [10]. This transform is well suited to constructing multi-resolution and multi-directional expansions using non-separable pyramid directional filter banks (PDFB) with a small redundancy factor [1].

As illustrated in Figure 1, image fusion techniques can be organized into three main categories. Primitive fusion schemes, such as averaging, weighted averaging and global Principal Component Analysis (PCA), are performed solely in the spatial domain. Despite their easy implementation, these methods pay the price of reduced contrast and distorted spectral characteristics [11]. To solve these problems, more sophisticated fusion methods in the transform domain exploit properties such as multi-resolution decomposition, which decomposes an image at different scales into several components that account for its important salient features [11]; they therefore perform better than methods operating in the spatial domain. Methods in the third category use statistical approaches, such as Bayesian optimization, to obtain the fused image; these, however, suffer from a significant increase in computational complexity [12], [13].

Figure 1. Categories of image fusion methods

In this paper, we mainly focus on image fusion techniques in the transform domain, as spatial-domain methods are well studied and statistical methods suffer from a significant increase in computational complexity. In Section 2, the wavelet transform, curvelet transform, contourlet transform and nonsubsampled contourlet transform are discussed in terms of principle, advantages and drawbacks. In Section 3, we explain the fusion framework, scheme and quality assessment metrics employed in our study. Section 4 presents the experimental study and analysis, and the conclusion is provided in Section 5.

2. TRANSFORM DOMAIN

2.1. Wavelet Transform

Since there is no subsampling and the size of the filters increases at each scale, the stationary wavelet transform (SWT) is computationally inefficient, especially in multiple dimensions [14]. In addition, the SWT only provides details in three directions at each scale. To overcome these problems, the dual-tree complex wavelet transform (DTCWT) was proposed; it is approximately shift-invariant, directionally selective, and computationally efficient. A dual tree of wavelet filters is used to obtain the real and imaginary parts of the complex wavelet coefficients. A simple delay of one sample between the filters of the first level in each tree is applied, and then odd-length and even-length linear-phase filters are used alternately; the filters in the two trees are simply time-reversed versions of each other. Figure 2 shows the practical implementation of the DTCWT on a 1-D signal: h0(n), h1(n) denote the low-pass/high-pass filter pair of the upper filter bank, and g0(n), g1(n) denote the low-pass/high-pass filter pair of the lower filter bank. The DTCWT satisfies the properties of approximate shift-invariance and directional selectivity in multiple dimensions. More details of the DTCWT are available in [14].

Figure 2. Practical implementation of the DTCWT on a 1-D signal. Analysis filter bank for the DTCWT (left). Synthesis filter bank for the DTCWT (right) [14]
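The decimated/undecimated trade-off described above can be seen directly in code. The following is a minimal sketch using the PyWavelets package: the DWT shrinks each subband at every level, while the SWT keeps every subband at the full image size, which is the source of both its shift-invariance and its redundancy. The image here is a placeholder.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)  # placeholder; dimensions divisible by 2**level for swt2

# Decimated DWT: subband sizes shrink at each of the 3 levels (coarsest first).
dwt = pywt.wavedec2(img, "db2", level=3)
print([d[0].shape for d in dwt[1:]])            # e.g. (34, 34), (66, 66), (129, 129)

# Undecimated SWT: every subband stays 256x256; note only three
# orientations (cH, cV, cD) per scale, as discussed above.
swt = pywt.swt2(img, "db2", level=3)
print([cH.shape for _, (cH, cV, cD) in swt])    # (256, 256) at every level
```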

2.2. Curvelet Transform

The DWT, SWT and DTCWT cannot capture the curves and edges of images well. Bases that are better suited to representing images should contain geometrical structure information. Candès and Donoho proposed the curvelet transform (CVT) with the idea of representing a curve as a superposition of bases of various lengths and widths obeying the scaling law width ≈ length². Two examples of CVT bases are shown in Figure 3(a), and Figure 3(b) presents two examples of wavelet bases. As Figure 3 illustrates, the CVT is more suitable than the wavelet for analyzing image edges, such as curve and line characteristics. The CVT is referred to as a true 2-D transform. The discrete version implemented in this research uses the wrapping transform. The flowchart of the second-generation curvelet transform is presented in Figure 4. First, the 2-D FFT is applied to the source image to obtain Fourier samples. Next, a discrete localizing window smoothly localizes the Fourier transform near the sheared wedges obeying the parabolic scaling. Then, the wrapping transformation is applied to re-index the data. Finally, the inverse 2-D FFT is used to obtain the discrete CVT coefficients. More details can be found in [9].

Figure 3. Comparison between curvelet bases and wavelet bases. (a) Two curvelet bases. (b) Two wavelet bases [9]

Figure 4. Flowchart of the second-generation curvelet transform via wrapping

2.3. Contourlet Transform

The wavelet transform is good at isolating discontinuities at object edges, but it cannot detect smoothness along the edges and captures only limited directional information. The contourlet transform effectively overcomes these disadvantages: it is a multi-scale and multi-directional framework for discrete images, in which the multi-scale analysis and the multi-direction analysis are performed separately, in series. The Laplacian pyramid (LP) [15] is first used to capture point discontinuities, followed by a directional filter bank (DFB) [16] that links point discontinuities into linear structures. The overall result is an image expansion whose basic elements resemble contour segments. The framework and filter bank of the contourlet transform are shown in Figure 5: first a multi-scale decomposition by the Laplacian pyramid, then a directional filter bank applied to each band-pass channel.

Figure 5. The contourlet transform framework (left) and the contourlet filter bank (right)

A contourlet expansion of an image consists of basis images oriented along various directions at multiple scales, with flexible aspect ratios. In addition to retaining the multi-scale and time-frequency localization properties of wavelets, the contourlet transform offers a high degree of directionality.

The contourlet transform adopts non-separable basis functions, which makes it capable of capturing the geometrical smoothness of a contour along any possible direction. Compared with traditional image expansions, contourlets can capture the 2-D geometrical structure in natural images much more efficiently [17]. Furthermore, for image enhancement one needs to improve the visual quality of an image with minimal distortion. Wavelet-based methods present some limitations here because they are not well adapted to detecting highly anisotropic elements, such as alignments in an image. Because of its anisotropy and directionality, the contourlet transform represents salient image features such as edges, lines, curves and contours better than the wavelet transform, and it is therefore well suited to multi-scale, edge-based image enhancement. To highlight the difference between the wavelet and contourlet transforms, Figure 6 shows a few wavelet and contourlet basis images. Contourlets offer a much richer set of directions and shapes, and are thus more effective in capturing the smooth contours and geometric structures in images.

Figure 6. Comparison between actual 2-D wavelets (left) and contourlets (right) [10]

2.4. Nonsubsampled Contourlet Transform

The contourlet transform achieves a better representation than the discrete wavelet transform, especially for edges and contours. However, due to its downsampling and upsampling, the contourlet transform lacks shift-invariance and produces ringing artifacts. Shift-invariance is required in image analysis applications such as edge detection, contour characterization and image fusion. The nonsubsampled contourlet transform (NSCT) was therefore proposed [18], based on a nonsubsampled pyramid decomposition and a nonsubsampled filter bank (NSFB). In the NSCT, the multi-scale analysis and the multi-direction analysis are again separated, but both are shift-invariant. First, the nonsubsampled pyramid (NSP) is used to obtain a multi-scale decomposition using two-channel nonsubsampled 2-D filter banks. Second, the nonsubsampled directional filter bank is used to split the band-pass sub-bands at each scale into different directions. Figure 7 shows a two-level decomposition using a combination of an NSP and an NSFB. Since there is no downsampling in the pyramid decomposition, the low-pass sub-band has no frequency aliasing, even when the bandwidth of the low-pass filter is larger than π/2. Hence, the nonsubsampled contourlet transform offers better frequency characteristics than the original contourlet transform.

Figure 7. Nonsubsampled contourlet transform. (a) NSFB structure that implements the NSCT. (b) Idealized frequency partitioning obtained with the proposed structure [19]
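A full NSCT implementation requires 2-D nonsubsampled filter banks, but the key idea behind its nonsubsampled pyramid, dilating the filters instead of decimating the image, can be sketched with the separable à trous algorithm below. This is a simplified stand-in, not the NSCT's actual filter bank, and the kernel choice is an assumption.

```python
import numpy as np
from scipy import ndimage

def undecimated_pyramid(img, levels=3):
    """A trous multiscale decomposition: no downsampling, so every subband
    keeps the full image size and the decomposition is shift-invariant."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # separable low-pass
    details, approx = [], img.astype(float)
    for j in range(levels):
        dilated = np.zeros(4 * 2**j + 1)
        dilated[::2**j] = kernel          # insert zeros: dilate the filter, not the image
        smooth = ndimage.convolve1d(approx, dilated, axis=0)
        smooth = ndimage.convolve1d(smooth, dilated, axis=1)
        details.append(approx - smooth)   # full-resolution band-pass sub-band
        approx = smooth
    return details, approx                # sum(details) + approx == img exactly
```

The NSCT follows the same principle with non-separable pyramid and directional filters, which is why its sub-bands have the same size as the original image and why, as noted later in Section 4, misregistration effects in fusion are reduced.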

3. IMAGE FUSION

In our study, image fusion is performed on two different sets of imagery: i) multispectral and panchromatic images and ii) hyperspectral and panchromatic images. For each set, the two source images are fused using the transform-domain fusion methods: i) wavelet transform, ii) curvelet transform, iii) contourlet transform and iv) nonsubsampled contourlet transform. More details about the source images are given in Section 4.

3.1. Fusion Framework

The fusion framework used in the experiments is shown in Figure 8. First, the source images are decomposed into multi-scale and multi-directional components, and these components are fused together according to a fusion scheme. Next, the inverse transform is performed to obtain the final fused image. The framework is the same for each transform; the main difference is how each transform decomposes the source images into multi-scale or multi-directional components. We can therefore run the experimental study under the same conditions and properly investigate the effectiveness of each transform method. The wavelet transform decomposes the source images into multi-scale components only, whereas the other three transforms decompose them into both multi-scale and multi-directional components.

Figure 8. Fusion framework

3.2. Fusion Scheme

As mentioned above, the general framework is identical for each of the transform methods (WT, CVT, CT, NSCT) employed in our study. However, each transform decomposes the source images in its own way to obtain the necessary sets of frequency coefficients.

After the decomposition step, the frequency components are ready for the fusion process. All of the transform methods follow the same fusion scheme and rule, described as follows (a code sketch of the wavelet case is given after this list):

1. The source images are decomposed according to each transform to obtain multi-scale or multi-directional frequency coefficients. In our experiments, a decomposition level of 3 was used, since levels beyond 3 gave no significant improvement.

2. The maximum frequency fusion rule is used to fuse the frequency coefficients: the higher frequency coefficient is selected from each set, and the selected coefficients become the coefficients of the fused image.

3. The inverse transform is applied to the fused coefficients to obtain the final fused image.
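As a concrete illustration, here is a minimal sketch of this scheme for the wavelet case, using the PyWavelets package; the other three transforms follow the same decompose/fuse/invert pattern with their own analysis and synthesis routines. The paper does not state how the coarsest approximation band is fused, so averaging it below is an assumption, as are the wavelet choice and function names.

```python
import numpy as np
import pywt

def fuse_wavelet(img_a, img_b, wavelet="db2", level=3):
    """Transform-domain fusion of two co-registered grayscale images
    using the maximum (absolute) coefficient selection rule."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Approximation band: rule unspecified in the paper; averaging is an assumption.
    fused = [(ca[0] + cb[0]) / 2.0]
    # Detail bands at every scale/orientation: keep the larger-magnitude coefficient.
    for details_a, details_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(details_a, details_b)))
    return pywt.waverec2(fused, wavelet)
```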
3.3. Quality Assessment Metrics

Specific analysis criteria are needed to measure and evaluate the performance of the experimental results. Qualitative and quantitative inspection are the two main means of evaluating distinct fusion schemes. However, qualitative approaches involve subjective factors and can be influenced by personal preference or eyesight, so quantitative approaches are generally required and preferred. For quantitative evaluation, a variety of fusion quality assessment methods have been introduced, but it is often difficult to obtain a convincing evaluation from a single criterion, especially in a comparative analysis of fusion results. This comparative study therefore adopts more than one criterion to ensure that the analysis is accurate. We employ quality assessment metrics from two categories: i) spectral analysis and ii) spatial analysis. The correlation coefficient (CC) [20], relative average spectral error (RASE) [21], spectral angle mapper (SAM) [22] and spectral information divergence (SID) [23] are used for spectral analysis. For spatial analysis, we employ entropy (E) [24, 25], the universal image quality index (UIQI) [26], the signal-to-noise ratio (SNR) and the average gradient (AG) [27].
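For reference, two of these metrics are simple enough to state in a few lines. The sketch below computes the correlation coefficient between a fused band and a reference band, and the mean spectral angle (SAM) between fused and reference multi-band cubes; both follow the standard definitions cited above, and the function names are ours.

```python
import numpy as np

def correlation(fused, reference):
    """Correlation coefficient (CC) between two single-band images."""
    f = fused.ravel() - fused.mean()
    r = reference.ravel() - reference.mean()
    return float(f @ r / np.sqrt((f @ f) * (r @ r)))

def sam(fused, reference, eps=1e-12):
    """Mean spectral angle mapper (SAM), in radians, over (H, W, bands) cubes."""
    dot = (fused * reference).sum(axis=2)
    norms = np.linalg.norm(fused, axis=2) * np.linalg.norm(reference, axis=2)
    return float(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0)).mean())
```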

4. EXPERIMENTAL STUDY AND ANALYSIS

Two data sets are used in our experiments: the first is a multispectral (MS) image and the second is a hyperspectral (HS) image. Each data set is described in detail in subsections 4.1 and 4.2, respectively, along with the pre-processing performed on the source images prior to fusion.

4.1. Fusion of Multispectral and Panchromatic Images

The first data set was downloaded from [28]. It is an MS image with 2.8 m resolution, acquired by the commercial IKONOS satellite. A 0.7 m panchromatic (PAN) image of the same scene is available from the same source; however, we instead pre-process the original MS image to synthesize both a PAN source image and a downgraded MS source image. This eliminates the step of co-registering the two source images, significantly reduces the computational load, and yields a perfectly co-registered pair of source images, in our case PAN and MS.

Accordingly, the original multispectral image is shifted in both the horizontal and vertical directions, and the shifted image is then convolved with a Gaussian smoothing point-spread function (PSF) of size 3×3 with variance 0.5. The result is downsampled by a factor of two in both directions, and finally zero-mean Gaussian noise is added. The final result is a synthesized multispectral image used as the first source image. To obtain the second source image, the panchromatic image, the original multispectral image is spectrally integrated over its entire spectral range, yielding a synthesized panchromatic image [29].
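This pre-processing pipeline is straightforward to reproduce. The sketch below follows the steps just described under stated assumptions: an explicit 3×3 Gaussian PSF with variance 0.5, a 1-pixel shift, and an illustrative noise level; the shift amount and noise standard deviation are not specified in the paper.

```python
import numpy as np
from scipy import ndimage

def synthesize_sources(ms, shift=(1, 1), noise_sigma=0.01, rng=None):
    """Create a degraded MS source and a PAN source from a reference MS cube
    of shape (H, W, bands), following the steps described above."""
    rng = np.random.default_rng(0) if rng is None else rng
    # 3x3 Gaussian PSF with variance 0.5, normalized to sum to 1.
    y, x = np.mgrid[-1:2, -1:2]
    psf = np.exp(-(x**2 + y**2) / (2 * 0.5))
    psf /= psf.sum()
    shifted = np.roll(ms, shift, axis=(0, 1))                  # spatial shift
    blurred = np.stack([ndimage.convolve(shifted[..., b], psf)
                        for b in range(ms.shape[2])], axis=2)  # PSF blur per band
    low = blurred[::2, ::2, :]                                 # downsample by 2
    ms_source = low + rng.normal(0.0, noise_sigma, low.shape)  # zero-mean noise
    pan_source = ms.mean(axis=2)       # spectral integration over all bands
    return ms_source, pan_source
```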

The original MS image serves as the reference, and the fusion results are compared against it to quantitatively analyze performance. The two synthesized source images (PAN and MS) and the original MS reference are shown in Figure 9, and the fusion results of (a) WT, (b) CVT, (c) CT and (d) NSCT are shown in Figure 10.

Figure 9. The original MS image and two synthesized source images. (a) Original MS image. (b) Synthesized PAN source image. (c) Synthesized MS source image

Figure 10. Fusion results. (a) WT. (b) CVT. (c) CT. (d) NSCT

Table 1. A performance comparison using quality assessment metrics (spectral: CC, RASE, SAM, SID; spatial: E, UIQI, SNR, AG).

Fusion Method   CC      RASE     SAM     SID     E       UIQI    SNR      AG
WT              0.846   44.853   0.277   0.236   4.578   0.674   68.652   5.267
CVT             0.859   44.738   0.268   0.227   5.163   0.683   68.738   5.331
CT              0.862   44.682   0.256   0.214   5.327   0.691   68.744   5.562
NSCT            0.879   44.527   0.245   0.205   5.582   0.699   68.757   5.924

4.2. Fusion of Hyperspectral and Panchromatic Images

The second data set is a hyperspectral image from MultiSpec by Purdue University [30]. A second comparative analysis is performed on a pair of hyperspectral and panchromatic images to increase the reliability of the conclusions. The original HS image is used as the reference image, and it is synthesized, with the same pre-processing as for the first data set, to obtain a perfectly registered pair of HS and PAN source images. The original (reference) HS image and the two source images (HS and PAN) are shown in Figure 11. Figure 12 shows the fusion results of each transform method: (a) WT, (b) CVT, (c) CT and (d) NSCT. A performance comparison is provided in Table 2.

Figure 11. The original HS image and two synthesized source images. (a) Original HS image. (b) Synthesized PAN source image. (c) Synthesized HS source image

Figure 12. Fusion results. (a) WT. (b) CVT. (c) CT. (d) NSCT

Table 2. A performance comparison using quality assessment metrics (spectral: CC, RASE, SAM, SID; spatial: E, UIQI, SNR, AG).

Fusion Method   CC      RASE     SAM     SID     E       UIQI    SNR      AG
WT              0.653   46.769   0.247   0.182   6.137   0.621   65.657   5.183
CVT             0.659   46.756   0.242   0.179   6.382   0.648   66.374   5.369
CT              0.662   46.741   0.236   0.166   6.624   0.653   67.498   5.636
NSCT            0.719   46.628   0.228   0.153   6.836   0.684   68.833   5.847

As Tables 1 and 2 show, the contourlet-based methods perform better than the wavelet- and curvelet-based methods, both spectrally and spatially. Moreover, comparing the two contourlet-based methods, NSCT outperforms CT. A likely reason is that the NSCT's multi-scalability, localization and multi-directionality let it capture the geometric information of the images effectively; it is adopted in image fusion to better exploit the characteristics of the original images and obtain fusion results with richer information. In addition, the NSCT is shift-invariant and its sub-band images have the same size as the original image, so the effect of misregistration on the fusion results is effectively reduced.

Table 3. A performance comparison of NSCT results using different decomposition levels (Dataset 1).

Decomposition Level   CC      RASE     SAM     SID     E       UIQI    SNR      AG
Level 2               0.826   44.545   0.267   0.224   5.263   0.528   68.128   5.813
Level 3               0.879   44.527   0.245   0.205   5.582   0.699   68.757   5.924
Level 4               0.867   44.532   0.259   0.218   5.324   0.642   68.721   5.910
Level 5               0.858   44.549   0.269   0.231   5.295   0.624   68.710   5.886

Table 4. A performance comparison of NSCT results using different decomposition levels (Dataset 2).

Decomposition Level   CC      RASE     SAM     SID     E       UIQI    SNR      AG
Level 2               0.642   46.831   0.263   0.172   6.679   0.612   67.764   5.713
Level 3               0.719   46.628   0.228   0.153   6.836   0.684   68.833   5.847
Level 4               0.654   47.749   0.257   0.181   6.632   0.641   67.721   5.745
Level 5               0.641   47.762   0.266   0.189   6.548   0.623   67.673   5.637

Tables 3 and 4 present a comparative analysis of the NSCT fusion results at different decomposition levels for datasets 1 and 2, respectively; each row gives the results at one decomposition level across the eight quality assessment metrics. Notably, decomposition levels beyond level 3 do not improve the fusion results. In terms of the spectral quality metrics, the results at levels 4 and 5 are worse than at level 3: the correlation between the spectral bands degrades as the decomposition level increases, so all four spectral metrics worsen, which means the spectral information is not well preserved during the decomposition and fusion processes. Similarly, the spatial quality metrics worsen beyond level 3: spatial resolution decreases as the decomposition level increases, because the spatial details of the source images are not preserved in the decomposition. We therefore conclude that decomposition level 3 is sufficient to obtain satisfying fusion results. (A sketch of such a level sweep follows.)
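Such a sweep over decomposition levels is easy to script. A minimal sketch, reusing the hypothetical fuse_wavelet() and correlation() helpers from the earlier sketches, with placeholder arrays for the source and reference bands:

```python
# ms_band, pan and reference_band are placeholder arrays for one spectral band,
# its panchromatic counterpart, and the corresponding reference band.
for level in (2, 3, 4, 5):
    fused = fuse_wavelet(ms_band, pan, level=level)
    print(f"level {level}: CC = {correlation(fused, reference_band):.3f}")
```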

5. CONCLUSIONS

In this paper, we performed experiments to compare and analyze the fusion results of four different transform-domain fusion methods. We evaluated our experimental results using eight performance quality metrics to ensure a fair comparison: CC, RASE, SAM and SID were adopted to analyze the fusion results spectrally, and E, UIQI, SNR and AG were adopted for spatial analysis. Quality metrics are necessary for measuring fusion performance because qualitative (visual) analysis is not as precise as quantitative analysis; it can be inaccurate due to an observer's personal capabilities and preferences. Our quantitative analyses show that the NSCT performs better than the other three transform-domain fusion methods. This result is convincing, since the NSCT was developed as a true 2-D transform that offers a more suitable way of constructing a multi-resolution, multi-scale and multi-directional framework for the fusion of discrete images. Finally, we performed additional experiments to analyze how the decomposition level affects fusion performance, and observed that level 3 was enough to obtain satisfying fusion results; decomposition levels beyond level 3 did not significantly improve them.

ACKNOWLEDGEMENTS

This work was supported in part by the Nevada EPSCoR program NSF award #EPS-IIA-1301726 and by the DOD-DTRA Grant/Award #HDTRA1-12-1-0033.

REFERENCES

[1] S. Ibrahim and M. Wirth, "Visible and IR data fusion technique using the contourlet transform," International Conference on Computational Science and Engineering (CSE '09), IEEE, vol. 2, pp. 42-47, 2009.
[2] Mouyan Zou and Yan Liu, "Multi-sensor image fusion: difficulties and key techniques," 2nd International Congress on Image and Signal Processing, IEEE, pp. 1-5, 2009.
[3] G. Pajares and M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.
[4] A.A. Goshtasby and S. Nikolov, "Image fusion: advances in the state of the art," Information Fusion, vol. 8, no. 2, pp. 114-118, 2007.
[5] V.S. Petrovic and C.S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228-237, 2004.
[6] G. V. Welland, Beyond Wavelets, Academic Press, 2003.
[7] E.J. Candès and D.L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation for objects with edges," in A. Cohen, C. Rabut, L.L. Schumaker (Eds.), Curve and Surface Fitting, Vanderbilt University Press, Nashville, TN, USA, pp. 105-120, 2000.
[8] E.J. Candès and D.L. Donoho, "New tight frames of curvelets and optimal representations of objects with C2 singularities," Communications on Pure and Applied Mathematics, vol. 75, no. 2, pp. 219-266, 2004.
[9] E. Candès, L. Demanet, D. Donoho and L. Ying, "Fast discrete curvelet transforms," SIAM Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861-899, 2006.
[10] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, 2005.
[11] G. Piella, "A general framework for multi-resolution image fusion: from pixels to regions," PNA-R0211, ISSN 1386-3711, 2002.
[12] B. Jeon and D. A. Landgrebe, "Decision fusion approach for multitemporal classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 7, pp. 1227-1233, 1999.
[13] R. K. Sharma and M. Pavel, "Adaptive and statistical image fusion," Society for Information Display Digest of Technical Papers, vol. 27, pp. 969-972, 1996.
[14] I.W. Selesnick, R.G. Baraniuk and N.G. Kingsbury, "The dual-tree complex wavelet transform," IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 123-151, 2005.

[15] P. J. Burt, "Merging images through pattern decomposition," Proceedings of SPIE, vol. 575, pp. 173-181, 1985.
[16] R. H. Bamberger, "A filter bank for the directional decomposition of images: theory and design," IEEE Transactions on Signal Processing, vol. 40, no. 4, pp. 882-893, 1992.
[17] Aboubaker M. ALEjaily et al., "Fusion of remote sensing images using contourlet transform," in Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering, Springer, pp. 213-218, 2008.
[18] A. L. da Cunha, J. Zhou and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089-3101, 2006.
[19] Qiang Zhang and Bao-Long Guo, "Research on image fusion based on the nonsubsampled contourlet transform," IEEE International Conference on Control and Automation (ICCA 2007), pp. 3239-3243, 2007.
[20] V. Vijayaraj, C.G. O'Hara and N.H. Younan, "Quality analysis of pansharpened images," IEEE Geoscience and Remote Sensing Symposium, pp. 85-88, 2004.
[21] T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation," Photogrammetric Engineering and Remote Sensing, vol. 66, no. 1, pp. 49-61, Jan. 2000.
[22] H.Z.M. Shafri, A. Suhaili and S. Mansor, "The performance of maximum likelihood, spectral angle mapper, neural network and decision tree classifiers in hyperspectral image analysis," Journal of Computer Science, vol. 3, no. 6, pp. 419-423, 2007.
[23] C.-I. Chang, "An information theoretic-based approach to spectral variability, similarity and discriminability for hyperspectral image analysis," IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1927-1932, 2000.
[24] Y. Chen, Z. Xue and R.S. Blum, "Theoretical analysis of an information-based quality measure for image fusion," Information Fusion, vol. 2, pp. 161-175, 2008.
[25] J.W. Roberts, J. van Aardt and F. Ahmed, "Assessment of image fusion procedures using entropy, image quality, and multispectral classification," Journal of Applied Remote Sensing, vol. 1, 023522, 2008.
[26] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81-84, Mar. 2002.
[27] Z. Li, Z. Jing, X. Yang and S. Sun, "Color transfer based remote sensing image fusion using non-separable wavelet frame transform," Pattern Recognition Letters, vol. 26, no. 13, pp. 2006-2014, 2005.
[28] http://studio.gge.unb.ca/unb/zoomview/examples.html
[29] M. Eismann and R. Hardie, "Application of the stochastic mixing model to hyperspectral resolution enhancement," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 9, pp. 1924-1933, 2004.
[30] MultiSpec, https://engineering.purdue.edu/~biehl/multispec/hyperspectral.html