FUSION OF TWO IMAGES BASED ON WAVELET TRANSFORM

Pavithra C 1, Dr. S. Bhargavi 2
1 Student, Department of Electronics and Communication, S.J.C. Institute of Technology, Chickballapur, Karnataka, India
2 Professor, Department of Electronics and Communication, S.J.C. Institute of Technology, Chickballapur, Karnataka, India

Abstract: In this paper we present a method for fusing multi-resolution two-dimensional (2-D) images using the wavelet transform under a combined gradient and smoothness criterion. The usefulness of the method is illustrated using various experimental image pairs, such as multi-focus images, multi-sensor satellite images, and CT and MR images of a cross-section of the human brain. The results of the proposed method are compared, both qualitatively and quantitatively, with those of some widely used wavelet-transform-based image fusion methods. Experimental results reveal that the proposed method produces a better fused image than the latter. The use of both the gradient and the relative smoothness criterion has a two-fold effect: the gradient criterion ensures that edges in the images are included in the fused image, while the relative smoothness criterion ensures that areas of uniform intensity are also incorporated, so that the effect of noise is minimized. It should be noted that the proposed algorithm is domain-independent.

Keywords: image fusion, wavelet transform, fused images, wavelet based fusion, multi-resolution

I. INTRODUCTION

Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. The fusion of images is often required for images of the same scene or objects acquired through different instrument modalities or capture techniques (such as multi-sensor, multi-focus and multimodal images).
For example, in multi-focus imaging one or more objects may be in focus in a particular image, while other objects in the scene may be in focus in other images. Among remotely sensed images, some have good spectral information whereas others have high geometric resolution. In the arena of biomedical imaging, two widely used modalities, magnetic resonance imaging (MRI) and the computed tomography (CT) scan, do not reveal every detail of brain structure identically. While the CT scan is especially suitable for imaging bone structure and hard tissue, MR images are much superior in depicting the soft tissues in the brain that play very important roles in detecting diseases affecting the skull base. These images are thus complementary in many ways, and no single image is totally sufficient in terms of its information content. The advantages of these images can be fully exploited by integrating their complementary features through image fusion, which generates an image composed of the features that are best detected or represented in the individual images. Important applications of image fusion include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics.

The first step toward fusion, which may be interpreted as a preprocessing step, is registration, which brings the constituent images into a common coordinate system; fusion of images is meaningful only when common objects in the images have identical geometric configuration with respect to size, location and orientation. In the next step, the images are combined into a single fused image through a judicious selection of the proportions of different features from the different images. Fusion techniques range from the simplest method of pixel averaging to more complicated methods such as principal component analysis and wavelet transform fusion.
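The simplest of these techniques, pixel averaging, amounts to a one-line operation. The following is our own minimal sketch (not the authors' code), assuming two registered grayscale images held as NumPy arrays:

```python
import numpy as np

def average_fuse(img1, img2):
    """Simplest spatial-domain fusion: the per-pixel average
    of two registered, equally sized grayscale images."""
    return (img1.astype(float) + img2.astype(float)) / 2.0
```

Averaging retains features common to both images but tends to wash out details present in only one of them, which is what motivates the transform-domain methods discussed next.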
Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or transformed into another domain and their transforms fused. Li et al. [4] suggested a multisensor image fusion using the wavelet transform, in which a cascaded sequence of forward and reverse wavelet transforms on multimodal images produces a fused image. Other common wavelet-transform-based fusion schemes include maximum selection (MS), which just picks the wavelet transform coefficient with the largest magnitude in each sub-band. Burt and Kolczynski used a normalized correlation between the two images' sub-bands over a small local area; the resultant coefficient for reconstruction is calculated from this measure via a weighted average of the two images' coefficients. Zhu Shu-long [7] proposed a wavelet-based fusion approach using a gradient criterion, while Hill et al. [6] achieved fusion through the application of the shift-invariant and directionally selective Dual-Tree Complex Wavelet Transform (DT-CWT).

Copyright to IJIRSET www.ijirset.com 1814

In this paper, an image fusion algorithm based on the wavelet transform is proposed. In the proposed scheme, the images to be processed are decomposed into sub-images with the same resolution at the same level and different resolutions at different levels; information fusion is then performed on the high-frequency sub-images under the combined gradient and relative smoothness criterion, and finally these sub-images are reconstructed into a resultant image with plentiful information. The developed scheme is applied to fuse multi-focus, multi-modal and remotely sensed multi-sensor images. Any image fusion method involves registration as the first and foremost step. In general, different sensors respond to scene characteristics in different and partially complementary ways.

II. WAVELET TRANSFORM AND WAVELET BASED FUSION

The wavelet transform is a powerful mathematical tool used in the field of signal processing. It divides a given function or signal into different scale components such that each scale component can be studied with a resolution that matches its scale. Mallat used wavelets as the foundation of a powerful new approach to signal processing and analysis called multi-resolution theory [1]. The same approach has been extended to multi-dimensional signal decomposition. In a multi-focus and multi-sensor image acquisition system, the size, orientation and location of an object relative to its own background may not be identical in all the images of different modalities.
Integration or fusion of multi-focus or multi-sensor information is possible only if the images are registered, i.e., positioned with respect to a common coordinate system. Image registration (in the case of fusing two images) is the process of determining the correspondence between all points in two images of the same scene or object.

The most common form of transform-domain image fusion is wavelet transform fusion. In common with all transform-domain fusion techniques, the transformed images are combined in the transform domain using a defined fusion rule and then transformed back to the spatial domain to give the resulting fused image. Wavelet transform fusion is defined more formally by considering the wavelet transforms w of the n registered input images I_j(x, y), j = 1, 2, ..., n, together with a fusion rule f. The inverse wavelet transform w^-1 is then computed, and the fused image I(x, y) is reconstructed as depicted in Figure 1:

I(x, y) = w^-1( f( w(I_1(x, y)), ..., w(I_n(x, y)) ) )

Figure 1: Fusion of wavelet transforms of images

III. PROPOSED IMAGE FUSION ALGORITHM

We propose a scheme for the fusion of n registered images of the same scene obtained through different modalities. The basic idea is to decompose each registered image, using the forward wavelet transform, into sub-images that have the same resolution at the same level and different resolutions at different levels. Information fusion is performed on the high-frequency (detail coefficient) sub-images, and the resulting image is obtained using the inverse wavelet transform. The proposed scheme uses two criteria, namely the gradient and the smoothness measure, which we discuss first.
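To make the transform-fuse-inverse pipeline concrete, here is a hedged sketch using a single-level 2-D Haar transform written directly in NumPy. This is our own minimal illustration, not the authors' implementation: it averages the approximation sub-image and applies simple maximum selection to the detail sub-images, and it assumes even image dimensions.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt2(img):
    """One level of the 2-D Haar transform: returns the approximation
    (LL) and the horizontal/vertical/diagonal detail sub-images."""
    lo = (img[0::2, :] + img[1::2, :]) / SQRT2   # low-pass along rows
    hi = (img[0::2, :] - img[1::2, :]) / SQRT2   # high-pass along rows
    ll = (lo[:, 0::2] + lo[:, 1::2]) / SQRT2
    lh = (lo[:, 0::2] - lo[:, 1::2]) / SQRT2
    hl = (hi[:, 0::2] + hi[:, 1::2]) / SQRT2
    hh = (hi[:, 0::2] - hi[:, 1::2]) / SQRT2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = (ll + lh) / SQRT2, (ll - lh) / SQRT2
    hi[:, 0::2], hi[:, 1::2] = (hl + hh) / SQRT2, (hl - hh) / SQRT2
    img = np.empty((2 * lo.shape[0], lo.shape[1]))
    img[0::2, :], img[1::2, :] = (lo + hi) / SQRT2, (lo - hi) / SQRT2
    return img

def wavelet_fuse(img1, img2):
    """Transform both images, average the LL sub-images, keep the
    larger-magnitude detail coefficient, then invert the transform."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return haar_idwt2(ll, *details)
```

A multi-level scheme, like the one the paper proposes, would recurse on the LL sub-image and apply a level-dependent fusion rule to the detail sub-images.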

A. THE GRADIENT CRITERION

Given a gray image I(x, y), the gradient at any pixel location (x, y) is obtained by applying the two-dimensional directional derivative:

∇I(x, y) = [Gx, Gy]^T = [∂I(x, y)/∂x, ∂I(x, y)/∂y]^T

The gradient magnitude, and two fast approximations of it, can be obtained as:

|∇I(x, y)| = sqrt(Gx^2 + Gy^2) ≈ |Gx| + |Gy| ≈ max(|Gx|, |Gy|)

B. THE RELATIVE SMOOTHNESS CRITERION

The second criterion, relative smoothness, uses the statistical moments of the gray-level histogram of the region in the neighborhood of the pixel (x, y). Let z be the random variable denoting gray levels and p(z_i), i = 0, 1, ..., L-1, be the corresponding histogram of the region in the neighborhood of pixel (x, y), where L is the number of distinct gray levels. The second moment (variance) of z about its mean is

σ²(z) = Σ_{i=0}^{L-1} (z_i - m)² p(z_i)

where m is the mean gray level of z, given by

m = Σ_{i=0}^{L-1} z_i p(z_i)

The measure of relative smoothness in the neighborhood of pixel (x, y) can thus be established as

R(x, y) = 1 - 1 / (1 + σ²(z))

The measure R(x, y) nears zero in neighborhoods of constant intensity and approaches one for large variances.

C. IMAGE FUSION

Let I_1(x, y), I_2(x, y), ..., I_n(x, y) be the n registered images to be fused. Let the decomposed low-frequency (approximation coefficient) sub-images be lI_1j(x, y), lI_2j(x, y), ..., lI_nj(x, y) and the decomposed high-frequency (detail coefficient) sub-images be hI_1j^k(x, y), hI_2j^k(x, y), ..., hI_nj^k(x, y), where j = 1, 2, ..., J is the resolution parameter and, for every j, k = 1, 2, 3 indexes the direction-sensitive wavelet decompositions along the horizontal, vertical and diagonal directions. For every even resolution parameter, j = 2, 4, ..., J, let GI_ij^k(x, y), i = 1, 2, ..., n, be the magnitude of the gradient of the image generated from the high-frequency components. For every odd resolution parameter, j = 1, 3, ..., J, let RI_ij^k(x, y), i = 1, 2, ..., n, be the relative smoothness of the image generated from the high-frequency components.
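A minimal sketch of the two criteria follows (ours, assuming NumPy; the forward-difference scheme and the 3x3 window are illustrative choices, not specified by the paper):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gradient_magnitude(img):
    """Fast gradient-magnitude approximation |Gx| + |Gy| using forward
    differences; the last row/column is padded by replication."""
    gx = np.abs(np.diff(img.astype(float), axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img.astype(float), axis=0, append=img[-1:, :]))
    return gx + gy

def relative_smoothness(img, win=3):
    """R = 1 - 1/(1 + local variance) over a win x win window, evaluated
    on the image interior (output shape (H-win+1, W-win+1)): near 0 in
    flat neighborhoods, near 1 where the local variance is large."""
    patches = sliding_window_view(img.astype(float), (win, win))
    return 1.0 - 1.0 / (1.0 + patches.var(axis=(-1, -2)))
```

A production implementation would pad the borders so that both maps have the same shape as the input image.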
The fused high-frequency sub-images are evaluated as:

hI_j^k(x, y) = hI_pj^k(x, y), where p = argmax_i { GI_ij^k(x, y) }, for j = 2, 4, ..., J and k = 1, 2, 3

hI_j^k(x, y) = hI_pj^k(x, y), where p = argmax_i { RI_ij^k(x, y) }, for j = 1, 3, ..., J and k = 1, 2, 3

The fused low-frequency sub-images are:

lI_j(x, y) = Σ_{i=1}^{n} c_i · lI_ij(x, y)

where c_i is the parameter that determines the contribution from each image.
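Per pixel, the high-frequency rule above keeps the detail coefficient of whichever image wins on the criterion score. A sketch (ours, assuming NumPy) for one sub-band and n images:

```python
import numpy as np

def fuse_subband(details, scores):
    """details, scores: lists of n equally sized 2-D arrays.
    For each pixel, return the detail coefficient of the image whose
    criterion score (gradient or relative smoothness) is largest."""
    d = np.stack(details)                          # shape (n, H, W)
    winner = np.argmax(np.stack(scores), axis=0)   # (H, W) index of best image
    return np.take_along_axis(d, winner[None], axis=0)[0]
```

On even decomposition levels the scores would be the gradient magnitudes GI, and on odd levels the relative smoothness maps RI.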

Using the inverse wavelet transform and the fused high- and low-frequency components evaluated above, we reconstruct the image. This reconstructed image has information integrated from all the different image sources. This scheme performs high-frequency image fusion between images under the combined gradient and relative smoothness criterion. The same method can be used for fusing color images: we can convert the color images from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity) space and process only the intensity component of the HSI space to acquire the fused image.

IV. DISCUSSION AND ANALYSIS

A. THE CORRELATION MEASURE

The fusing ability of the algorithm is measured quantitatively by means of the pixel-gray-level correlation between two images. The correlation between two images f(x, y) and g(x, y) is defined as

CR(f, g) = Σ_{x,y} (f(x, y) - f̄)(g(x, y) - ḡ) / sqrt( Σ_{x,y} (f(x, y) - f̄)² · Σ_{x,y} (g(x, y) - ḡ)² )

where f̄ = (1/N) Σ_{x,y} f(x, y) and ḡ = (1/N) Σ_{x,y} g(x, y) are the mean gray levels, N being the total number of pixels in either of the images.

The Haar wavelet was used as the mother wavelet, and the resolution parameter j was taken up to 6. The Haar wavelet was chosen for ease of implementation: our aim was a multi-resolution description of the image prior to fusion, for which we need a filter, and without loss of generality we chose the Haar wavelet. The fused image enjoys relatively high correlation with either of the input images, implying that features of both images are transported to the fused image. We did not restrict our algorithm to any specific type or class of images. Equal contribution of each low-frequency sub-image was assumed in the experiments.

In the first experiment a pair of multi-focus images was taken. In figure 2(a) the clock in front is in focus, while in figure 2(b) the clock at the back is in focus. Figure 2(c) shows the image obtained after fusion. A CT-MR image pair was taken up for the second experiment. Figure 3(a) is the CT image of the human brain showing bones and hard tissue.
Figure 3(b) is the MR image showing soft tissues. The fused CT-MR image is shown in figure 3(c).

Fig. 2(a)   Fig. 2(b)   Fig. 2(c)

Fig. 3(a)   Fig. 3(b)   Fig. 3(c)
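The correlation measure CR(f, g) defined above reduces to a few lines of code (a sketch under the stated definition, assuming NumPy arrays):

```python
import numpy as np

def correlation(f, g):
    """Zero-mean normalized cross-correlation CR(f, g) between two
    equally sized images; +1 for identical, -1 for inverted images."""
    f = f.astype(float) - f.mean()
    g = g.astype(float) - g.mean()
    return float((f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum()))
```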

Figure 4 shows the resultant fused images obtained from the various schemes.

Fig. 4(a)   Fig. 4(b)   Fig. 4(c)   Fig. 4(d)

Figure 4: (a) fused image under the proposed scheme, (b) fused image under the MS scheme, (c) fused image under the GS scheme, (d) fused image under the WA scheme

In our experiments the values of CR(FuseImg, OrgImg1), CR(FuseImg, OrgImg2) and CR(OrgImg1, OrgImg2) were computed and are shown in Table I below.

S. No.   CR(OrgImg1, OrgImg2)   CR(FuseImg, OrgImg1)   CR(FuseImg, OrgImg2)
1         0.9544                 0.9887                 0.9866
2        -0.0047                 0.5769                 0.7768
3         0.5923                 0.8115                 0.8942

TABLE I. CORRELATION MEASURES

The fused image enjoys relatively high correlation with either of the input images. This implies that features of both images are transported to the fused image.

B. COMPARISON WITH EXISTING FUSION ALGORITHMS

Wavelet-based image fusion is one of the most commonly used methods to achieve image fusion, and many different schemes have been proposed. The following three previously developed fusion-rule schemes were implemented using discrete-wavelet-transform-based image fusion, and the results were compared with the proposed scheme.

(1) Maximum selection (MS) scheme: this simple scheme just picks the coefficient with the largest magnitude in each sub-band.
(2) Weighted average (WA) scheme: the resultant coefficient for reconstruction is calculated as a weighted average of the two images' coefficients.
(3) Gradient criterion selection (GS) scheme: proposed by Zhu Shu-long in [7], this scheme picks the coefficient in each sub-band with the largest magnitude of the gradient defined over the neighborhood.

The correlation coefficient was estimated using the equation shown above and compared across the existing techniques. The comparative correlation measures are shown in Table II below.

Fusion Scheme      CR(FuseImg, OrgImg1)   CR(FuseImg, OrgImg2)
Proposed Scheme    0.9887                 0.9866
MS                 0.9885                 0.9864
WA                 0.9885                 0.9866
GS                 0.9883                 0.9867

TABLE II. CORRELATION MEASURES BETWEEN FUSION SCHEMES

V. CONCLUSIONS

The fusion of images is the process of combining two or more images into a single image retaining important features from each of the images. A scheme for the fusion of multi-resolution 2-D gray-level images based on the wavelet transform is presented in this work. If the images are not already registered, a point-based registration using an affine transformation is performed prior to fusion. The images to be fused are first decomposed into sub-images of different frequencies; information fusion is then performed on these sub-images under the proposed gradient and relative smoothness criterion, and finally the sub-images are reconstructed into the resulting image with plentiful information. A quantitative measure of the degree of fusion is estimated by the cross-correlation coefficient, and a comparison with some of the existing wavelet-transform-based image fusion techniques is carried out.

It should be noted that the proposed algorithm is domain-independent: it uses knowledge of neither the imaging device nor the objects being imaged. Therefore, it can be applied to the fusion of different kinds of multi-modal images. Second, as the actual fusion is done during the construction of the modified coefficients, the scheme extends to the fusion of n images, as already described in the algorithm. As stated earlier, the same method can be used for fusing color images: we can convert the color images from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity) space and process only the intensity component of the HSI space to acquire the fused image. However, our method is computationally more expensive and needs more space for implementation. We wish to extend our work to the fusion of multi-resolution images.

REFERENCES

[1] Stephane G.
Mallat, "A theory for multi-resolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, pp. 674-693, July 1989.
[2] A. Collignon, D. Vandermeulen, P. Suetens, G. Marchal, "Registration of 3D multimodality medical imaging using surfaces and point landmarks," Pattern Recognition Letters, 15, pp. 461-467, 1994.
[3] S. Banerjee, D.P. Mukherjee, D. Dutta Majumdar, "Point landmarks for the registration of CT and MR images," Pattern Recognition Letters, 16, pp. 1033-1042, 1995.
[4] H. Li, B.S. Manjunath, S.K. Mitra, "Multisensor image fusion using the wavelet transform," GMIP: Graphical Models and Image Processing, 57(3), pp. 235-245, 1995.
[5] A.A. Goshtasby, J.L. Moigne, "Image registration: Guest Editor's introduction," Pattern Recognition, 32, pp. 1-2, 1999.
[6] Paul Hill, Nishan Canagarajah and Dave Bull, "Image fusion using complex wavelets," BMVC 2002.
[7] Zhu Shu-long, "Image fusion using wavelet transform," Symposium on Geospatial Theory, Processing and Applications, Ottawa, 2002.
[8] Paul Hill, Nishan Canagarajah and Dave Bull, "Image fusion using complex wavelets," BMVC 2002.

BIOGRAPHY

PAVITHRA C is pursuing the M.Tech degree in Signal Processing from VTU University, Belgaum, Karnataka, India.

Dr. S. Bhargavi is presently working as a Professor in the Department of Electronics and Communication Engineering, SJCIT, Chikballapur, Karnataka, India. She has 13 years of teaching experience. Her areas of interest are communication systems, network security, robotics, embedded systems, protocol engineering, image processing, low-power VLSI, wireless communication, ASIC and cryptography.