
Wavelet-based hyperspectral and multispectral image fusion
Richard B. Gomez, Amin Jazaeri, and Menas Kafatos
Center for Earth Observing and Space Research, School of Computational Sciences, George Mason University, 4400 University Drive, Fairfax, VA 22030 USA

ABSTRACT
Different research groups have recently studied the concept of wavelet image fusion between panchromatic and multispectral images using different approaches. In this paper, a new wavelet-based approach to data fusion between hyperspectral and multispectral images is presented. Using this wavelet concept of hyperspectral and multispectral data fusion, we performed image fusion between two spectral levels of a hyperspectral image and one band of a multispectral image. The reconstructed image has a root mean square error (RMSE) of 2.8 per pixel and a signal-to-noise ratio (SNR) of 36 dB. We achieved our goal of creating a composite image that has the same spectral resolution as the hyperspectral image and the same spatial resolution as the multispectral image, with minimum artifacts.

Keywords: Hyperspectral, multispectral, wavelet, image fusion, data fusion

1. INTRODUCTION
Hyperspectral and multispectral imaging systems are widely used in Earth observing systems and regional applications. A hyperspectral imaging system provides detailed spectral information that enables the observer to detect and classify a pixel based on its spectral characteristics. However, in many cases, the spatial resolution of these systems is lower than that of a multispectral imaging system with fewer spectral channels [1]. NASA's newly launched EO-1 spacecraft will provide researchers with space-based Earth observations with unique spatial and spectral characteristics. Hyperion is one of the key instruments on board this spacecraft, capable of resolving 220 spectral bands (from 0.4 to 2.5 µm) with a 30-meter spatial resolution [2]. Although the spatial resolution of this instrument is better than or equal to that of many conventional imaging systems such as Landsat, it is still less than the spatial resolution of SPOT 4 or Space Imaging's Ikonos. The SPOT 4 satellite carries two instruments, a high-resolution visible and infrared instrument and a vegetation instrument, that have 20 m pixel resolution in the multispectral mode [1]. Ikonos has 4 m spatial resolution for multispectral data. In many remote-sensing applications, it is imperative to maintain high spectral and spatial resolution [3]. Therefore, image fusion between hyperspectral and multispectral images becomes important to achieve such a goal.

2. IMAGE FUSION
Image fusion refers to a process that extracts redundant and complementary information from a set of input images and fuses it into a single, more complete image. The fused image should have more useful information content. The fusion of the two images can take place at the signal, pixel, or feature level [4]. In this paper, the problem of image fusion at the pixel level is addressed. One of the main issues in image fusion is image alignment, which refers to pixel-by-pixel alignment of the images. Another important issue is the dynamic range between the hyperspectral and multispectral images. Since different sensors provide these images, the dynamic range of the two images must be rescaled to match one another. Different research groups have recently studied the concept of wavelet image fusion between panchromatic and multispectral images from different approaches.
For details on their work, interested readers are referred to [5] and [6]. In the concept of hyperspectral and multispectral fusion presented in this paper, image fusion occurs between two spectral levels of the hyperspectral image and one band of the multispectral image. The reconstructed image will have the same spatial resolution as the multispectral image and the same spectral resolution as the hyperspectral image, as shown in figure 1.
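Before the wavelet fusion described below, the two inputs must be co-registered and their dynamic ranges matched, as noted in section 2. The following is a minimal sketch of one such rescaling (simple min-max matching); the paper does not state which rescaling it actually used, so this is only an illustration.

```python
import numpy as np

def match_dynamic_range(src, ref):
    """Linearly rescale 'src' so that its minimum and maximum match those of 'ref'.
    A sketch of the dynamic-range matching step; min-max matching is an assumption."""
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    src_min, src_max = src.min(), src.max()
    if src_max == src_min:                      # constant image: avoid division by zero
        return np.full_like(src, ref.mean())
    scaled = (src - src_min) / (src_max - src_min)
    return scaled * (ref.max() - ref.min()) + ref.min()
```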

Figure 1

2.1 Wavelet-based image fusion
Wavelets are translation and dilation operators generated from a single function ψ that decompose a signal into a family of functions ψ_s. These operators can be considered as low-pass and high-pass filters: the low-pass filter is a moving average and the high-pass filter is a moving difference [7]. The wavelet transform of a two-dimensional image is shown in figure 2. As shown in this figure, the original image has been decomposed into different types of detail at different levels. A more detailed discussion of wavelet-based image processing is provided in [8], [9], and [10].

Figure 2: Detailed wavelet coefficients at different levels
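The description above (moving average as the low-pass filter, moving difference as the high-pass filter) corresponds to a Haar-type decomposition. Below is a minimal sketch of one decomposition/reconstruction level in plain NumPy; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar-type decomposition: moving averages and moving differences
    along columns, then rows. Assumes even height and width. Returns the
    approximation sub-band and three detail sub-bands."""
    img = np.asarray(img, dtype=float)
    lo_c = (img[:, 0::2] + img[:, 1::2]) / 2.0   # column averages (low-pass)
    hi_c = (img[:, 0::2] - img[:, 1::2]) / 2.0   # column differences (high-pass)
    ll = (lo_c[0::2, :] + lo_c[1::2, :]) / 2.0   # approximation
    lh = (lo_c[0::2, :] - lo_c[1::2, :]) / 2.0   # detail sub-band
    hl = (hi_c[0::2, :] + hi_c[1::2, :]) / 2.0   # detail sub-band
    hh = (hi_c[0::2, :] - hi_c[1::2, :]) / 2.0   # detail sub-band
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2: rebuild the image from the four sub-bands."""
    rows, cols = ll.shape[0] * 2, ll.shape[1] * 2
    lo_c = np.empty((rows, ll.shape[1]))
    hi_c = np.empty((rows, ll.shape[1]))
    lo_c[0::2, :], lo_c[1::2, :] = ll + lh, ll - lh
    hi_c[0::2, :], hi_c[1::2, :] = hl + hh, hl - hh
    img = np.empty((rows, cols))
    img[:, 0::2], img[:, 1::2] = lo_c + hi_c, lo_c - hi_c
    return img
```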

The wavelet-based image fusion is applied to the two-dimensional hyperspectral test image at each spectral level, as shown in figure 3. In this method, pixel a is fused with pixels g, i, j, and k. In order to fuse one pixel with 4 pixels, pixel a is inverse wavelet transformed to obtain coefficients a1, a2, a3, and a4. An architecture that is equally efficient at computing both the discrete wavelet transform (DWT) and the inverse discrete wavelet transform (IDWT) has been described in the literature [11].

Figure 3

These coefficients are then added to the four wavelet transform coefficients gv, hd, ia, and jh, which are the approximation and detail coefficients corresponding to g, h, i, and j of the multispectral test image, as shown in figure 4. In the next spectral level, pixel b is fused with g, h, i, and j in the same way that pixel a was fused. The reason is that both pixels a and b in the hyperspectral data fall into the same wavelength range as pixels g, h, i, and j in the multispectral band.

Figure 4
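Reusing the haar_dwt2/haar_idwt2 sketches above, one possible reading of this per-block fusion step is sketched below. The expansion of a single hyperspectral pixel into a1..a4, the pairing of those values with the approximation and detail coefficients, and the rescaling factor are all assumptions, since the paper does not spell them out.

```python
import numpy as np  # haar_dwt2 / haar_idwt2 are the sketches given earlier

def fuse_block(hyper_pixel, multi_block, scale=1.0):
    """Fuse one hyperspectral pixel with the co-located 2x2 block of a
    multispectral band by adding coefficients, as described in the text.
    Pixel expansion, coefficient pairing, and 'scale' are assumptions."""
    zero = np.zeros((1, 1))
    # inverse-transform the single hyperspectral pixel to obtain a1..a4
    a = haar_idwt2(np.array([[float(hyper_pixel)]]), zero, zero, zero)
    # approximation and detail coefficients of the 2x2 multispectral block
    ll, lh, hl, hh = haar_dwt2(multi_block)
    # add the expanded values to the block's coefficients (assumed pairing)
    fused_coeffs = (ll + a[0, 0], lh + a[0, 1], hl + a[1, 0], hh + a[1, 1])
    # inverse-transform the fused coefficients and rescale to get 4 fused pixels
    return scale * haar_idwt2(*fused_coeffs)
```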

The process continues by fusing pixel c in the hyperspectral level with pixels o, p, q, and r in the multispectral band. Pixels o, p, q, and r do not exist in the multispectral test image and are treated as missing data. Their values are interpolated from the existing multispectral test image bands and the corresponding hyperspectral test image. For instance, pixel o is the average of pixels g and k in the multispectral band and pixels c and d in the corresponding hyperspectral level. After finding the fused coefficients, each block of 4 coefficients is inverse wavelet transformed and rescaled to produce 4 pixels in the corresponding level of the reconstructed image. The process continues until every pixel in the hyperspectral test image has been fused with its corresponding pixels in the multispectral test image.

3. RESULTS
The above algorithm was applied to a hyperspectral test image of size 64x64 with 28 spectral levels and a multispectral test image of size 128x128 with 7 spectral bands. Both test images were constructed from a hyperspectral image with 128x128 resolution and 28 spectral levels. Unlike techniques that rely on human perception, this approach provides an effective way of comparing the reconstructed image with a reference image. Each pixel in the hyperspectral test image is the average of 4 pixels in the original image, and each spectral band in the multispectral test image is the average of 2 levels in the original image. Instead of having 14 bands, the multispectral test image has only 7 bands; the other 7 bands were discarded to make the problem more realistic. The reconstructed image has 128x128 resolution with 28 spectral levels, the same as the original hyperspectral image, as shown in figure 5.

Figure 5

The original and reconstructed images for levels 9, 10, and 11, along with the differences between the original and reconstructed images for these levels, are shown in figure 6. Since level 11 was reconstructed using an averaging scheme, it contains more artifacts than levels 9 and 10.
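The construction of the two test inputs described above can be sketched as follows. Shapes follow the text (a 28-level, 128x128 reference cube); the choice of which 7 of the 14 averaged bands to keep is an assumption, since the paper only says that 7 were discarded.

```python
import numpy as np

def make_test_images(cube):
    """Build the hyperspectral and multispectral test images from a reference
    cube of shape (levels, rows, cols), e.g. (28, 128, 128)."""
    levels, rows, cols = cube.shape
    # hyperspectral test image: each pixel is the average of a 2x2 spatial block
    hyper_test = cube.reshape(levels, rows // 2, 2, cols // 2, 2).mean(axis=(2, 4))
    # multispectral test image: each band is the average of 2 adjacent levels,
    # then only every other averaged band is kept (7 of 14 bands, an assumption)
    multi_all = cube.reshape(levels // 2, 2, rows, cols).mean(axis=1)
    multi_test = multi_all[::2]
    return hyper_test, multi_test
```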

Figure 6

The spectral variation of an original pixel is depicted in figure 7. It shows that most of the error between the original and reconstructed spectral information comes from the averaged data. The root mean square error (RMSE) and signal-to-noise ratio (SNR) between the reconstructed and original images were calculated using the following equations:

RMSE = \sqrt{\frac{1}{N}\sum_{i,j}\left(X_{i,j} - Y_{i,j}\right)^{2}}

SNR = 10\log\!\left(\frac{\sum_{i,j} X_{i,j}^{2}}{\sum_{i,j}\left(X_{i,j} - Y_{i,j}\right)^{2}}\right)

where X is the original image, Y is the reconstructed image, and N is the number of pixels. The RMSE had a value of 2.487 per pixel and the SNR was 36.448 dB.
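A direct sketch of the two error measures follows, assuming a base-10 logarithm since the result is quoted in dB.

```python
import numpy as np

def rmse_snr(original, reconstructed):
    """RMSE and SNR (in dB) between a reference image X and a reconstruction Y,
    following the two equations above."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    err = x - y
    rmse = np.sqrt(np.mean(err ** 2))                          # per-pixel RMSE
    snr = 10.0 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))   # SNR in dB
    return rmse, snr
```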

Figure 7

4. CONCLUSIONS
A new wavelet-based method for data fusion between hyperspectral and multispectral images was studied. The presented results show that fusion between multispectral and hyperspectral images using wavelets produces images of high spectral and spatial resolution with minimum artifacts. Although the above method used only two input images for fusion, the algorithm can easily be extended to accommodate more input images. It is worth noting that using more advanced techniques to interpolate the missing data corresponding to the multispectral bands should make a higher SNR and a lower RMSE achievable.

REFERENCES
1. John A. Richards and Xiuping Jia, Remote Sensing Digital Image Analysis, Berlin, Germany: Springer-Verlag, 1999.
2. NASA: http://eo1.gsfc.nasa.gov/technology/hyperion.html
3. Hui Li, B. S. Manjunath, and Sanjit Mitra, "Multi-Sensor Image Fusion Using the Wavelet Transform," in Proceedings of the 1994 IEEE International Conference on Image Processing, vol. 1, pp. 51-55.
4. Lehigh University: http://www.eecs.lehigh.edu/spcrl/if/image_fusion.htm
5. Wang Zhijun, Li Deren, and Li Qingquan, "Wavelet Analysis Based Image Fusion of Landsat-7 Panchromatic Image and Multi-spectral Images," preprint, 1999.
6. J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, "Multiresolution-Based Image Fusion with Additive Wavelet Decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, May 1999.
7. G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley, MA: Wellesley-Cambridge Press, 1996.
8. J. L. Starck, F. Murtagh, and A. Bijaoui, Image Processing and Data Analysis, Cambridge: Cambridge University Press, 1998.
9. Farid Hassainia, Ismael Magana M., Francois Langevin, and Jean Pierre Kernevez, "Image Fusion by an Orthogonal Wavelet Transform and Comparison with Other Methods," in Proceedings of the 1992 IEEE International Conference of the Engineering in Medicine and Biology Society, vol. 14, pp. 1246-1247.
10. Laura J. Chipman, Timothy M. Orr, and Lewis N. Graham, "Wavelets and Image Fusion," in Proceedings of the 1995 IEEE International Conference on Image Processing, vol. 3, pp. 248-251.
11. M. Vishwanath and R. M. Owens, "A Common Architecture for the DWT and IDWT," in Proceedings of the 1996 International Conference on Application-Specific Systems, Architectures, and Processors (ASAP '96).