MULTIMODAL MEDICAL IMAGE FUSION BASED ON HYBRID FUSION METHOD


Sinija T. S., M.Tech, Department of Computer Science, Mohandas College of Engineering
Karthik M., Assistant Professor in CSE, Mohandas College of Engineering

Abstract--- The importance of the information offered by medical images for diagnosis support can be increased by combining images from different, compatible medical devices. Medical image fusion is used to derive useful information from multimodality medical image data, and the fused image is represented in a format suitable for computer processing. The source medical images undergo a three-level fusion process. Two fusion rules, based on phase congruency and directive contrast, are proposed and used to fuse the low- and high-frequency coefficients. Finally, the fused image is subjected to a further combined fusion using a centralization method. Experimental results and a comparative study show that the proposed fusion framework provides an effective way to enable more accurate analysis of multimodality images. Further, the applicability of the proposed framework is demonstrated on three clinical examples of persons affected by Alzheimer's disease, subacute stroke and recurrent tumor.

Keywords--- Multimodal medical imaging, Phase congruency, Directive contrast, NSCT.

1. INTRODUCTION

Processing an image and recovering the hidden information from a noisy or blurred image can be carried out by various methods. Techniques such as image fusion and super resolution enhance image quality to reveal hidden information. The combination of two or more images of the same scene into a single image is known as image fusion. Several fusion algorithms have evolved, such as pyramid-based, wavelet-based and curvelet-based methods, the HSI (Hue Saturation Intensity) color model and PCA (Principal Component Analysis), and each of them is lacking in one criterion or another [1].

Fusion of medical images must be carried out carefully, as the whole diagnosis process depends on it. Medical images should be of high resolution with the maximum possible detail [2]; they should represent all important characteristics of the organ being imaged, so the integrated image should preserve the maximum possible detail. Our aim is therefore to adopt the best method of image fusion so that the diagnosis is accurate.

Medical imaging has become increasingly important in medical diagnosis, enabling radiologists to quickly acquire images of the human body and its internal structures with effective resolution. Different medical imaging techniques such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) provide different perspectives on the human body. For example, CT scans depict dense structures such as bones and implants with little distortion, MRI scans depict normal and pathological soft tissue, while PET scans provide better information on blood flow and metabolic activity but with low spatial resolution. An improved understanding of a patient's condition can therefore be achieved through the use of different imaging modalities. A powerful technique used in medical image analysis is medical image fusion, in which streams of information from medical images of different modalities are combined into a single fused image.
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, non-invasive diagnosis and treatment planning. A single modality of medical image cannot provide comprehensive and accurate information; combining anatomical and functional medical images through image fusion provides more useful information. The fused image contains a more accurate description of the scene than any of the individual source images and is more suitable for human visual and machine perception and for further image processing and analysis tasks.

Two images, image A and image B, of the same or different modalities are taken, and by applying fusion methods a final fused image F is obtained that is more informative than either single image. Multimodal medical image fusion not only helps in diagnosing diseases; it also reduces storage cost by storing a single fused image instead of multiple source images. There are some important requirements for the image fusion process [1]: the fused image should preserve all relevant information from the input images, and the fusion should not introduce artifacts that could lead to a wrong diagnosis.

Recently, with the development of multiscale decomposition, the wavelet transform has been identified as an ideal method for image fusion. However, it is argued that wavelet decomposition is good at representing isolated discontinuities but not edges and textured regions; further, it captures only limited directional information, along the vertical, horizontal and diagonal directions [21]. These issues are addressed by a more recent multiscale decomposition, the contourlet, and its non-subsampled version. The contourlet is a true 2-D sparse representation for 2-D signals such as images, where the sparse expansion is expressed by contour segments.

The objective of multimodality image fusion is to combine the complementary and redundant information from multiple images and generate one image that contains all the information present in the source images, so that the resulting fused image gives a better description than any individual image. Complementary information that is not present simultaneously in any single source image can be observed together in the fused image produced by the proposed algorithm. The motivation behind fusing multimodality, multi-resolution images is to create a single enhanced image with improved interpretability, better suited to human visual perception, object detection and target recognition.

In the proposed method the source medical images undergo a three-level fusion process. The source medical images are first transformed by the non-subsampled contourlet transform (NSCT), followed by fusion of the low-frequency and high-frequency components; two fusion rules based on phase congruency and directive contrast are used to fuse the low- and high-frequency coefficients. In the second level, the source images are transformed again using the wavelet transform. In the third level the results of the first and second levels are fused together using another fusion rule, the centralization method. This new fusion significantly reduces the amount of distortion and the loss of contrast information usually observed in fused images.

3. PROPOSED SYSTEM

The proposed method consists of three levels of fusion. The first level of fusion is done with the NSCT transform, as in the existing method; then the source images undergo a wavelet transform, and the two transform outputs are fused together using a combined fusion method. There are three main steps in the proposed algorithm:

1. Image fusion in the NSCT domain
2. Image fusion in the wavelet domain
3. Fusion of the above results

3.1 Basic Concepts

3.1.1 Non-Subsampled Contourlet Transform

The NSCT, based on the theory of the contourlet transform, is a multi-scale and multi-direction computational framework for discrete images. It can be divided into two stages: the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB). The NSP stage ensures the multiscale property by using a two-channel non-subsampled filter bank; one low-frequency image and one high-frequency image are produced at each NSP decomposition level. Subsequent NSP decomposition stages are carried out iteratively on the available low-frequency component to capture the singularities in the image. As a result, the NSP yields sub-images consisting of one low-frequency image and a set of high-frequency images, all having the same size as the source image, with one high-frequency image per decomposition level.

The NSDFB is a two-channel non-subsampled filter bank constructed by combining directional fan filter banks. It performs directional decomposition, in several stages, of the high-frequency images produced by the NSP at each scale, and it yields directional sub-images with the same size as the source image. The NSDFB therefore gives the NSCT its multi-direction property and provides more precise directional detail information.
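NSCT code is not included in the paper and the transform is not part of the common Python imaging libraries, so the following minimal sketch (an illustrative stand-in, not the authors' implementation) imitates only the NSP stage with iterated Gaussian smoothing: every level yields a high-frequency detail image of the same size as the source, and the final low-pass residual plays the role of the low-frequency sub-band. The directional filter bank (NSDFB) stage is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nonsubsampled_pyramid(img, levels=3, sigma=1.0):
    """Undecimated pyramid: each level returns a same-size
    high-frequency detail image; the final low-pass residual
    plays the role of the NSP low-frequency sub-band."""
    img = img.astype(np.float64)
    highs, low = [], img
    for level in range(levels):
        # widen the smoothing kernel instead of downsampling,
        # so all sub-images keep the source resolution
        smoothed = gaussian_filter(low, sigma * (2 ** level))
        highs.append(low - smoothed)   # high-frequency detail at this scale
        low = smoothed                 # pass the low-pass residual down
    return low, highs

if __name__ == "__main__":
    demo = np.random.rand(256, 256)
    low, highs = nonsubsampled_pyramid(demo, levels=3)
    # every sub-image has the same shape as the source image
    assert low.shape == demo.shape and all(h.shape == demo.shape for h in highs)
```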
3.1.2 Phase Congruency

Phase congruency is a measure of feature perception in images which provides an illumination- and contrast-invariant feature extraction method. The approach is based on the local energy model, which postulates that significant features are found at points in an image where the Fourier components are maximally in phase; furthermore, the angle at which phase congruency occurs signifies the feature type. The phase congruency approach to feature perception has been used for feature detection: first, log-Gabor filter banks at different discrete orientations are applied to the image, and the local amplitude and phase at a point (x, y) are obtained.

The main properties that motivate the use of phase congruency for multimodal fusion are as follows:

1. Phase congruency is invariant to different pixel intensity mappings. Images captured with different modalities have significantly different pixel mappings, even if the object is the same, so a feature that is independent of the pixel mapping is preferred.
2. Phase congruency is invariant to illumination and contrast changes. The capture environment differs between modalities, which changes illumination and contrast, so multimodal fusion benefits from an illumination- and contrast-invariant feature.
3. Edges and corners in the images are identified by collecting the frequency components of the image that are in phase. Since phase congruency identifies the Fourier components that are maximally in phase, it provides improved localization of image features, which leads to efficient fusion.

A simplified sketch of the computation is given below.
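Kovesi's full phase congruency measure involves log-Gabor filter banks, noise compensation and frequency-spread weighting; the sketch below is a heavily simplified version that keeps only the core idea, namely the ratio of in-phase local energy to total filter amplitude, so that values near 1 mark points where the Fourier components are maximally in phase. The filter parameters (minimum wavelength, scale multiplier, bandwidth terms) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def log_gabor_bank(shape, n_scales=3, n_orient=4,
                   min_wavelength=6.0, mult=2.0,
                   sigma_f=0.55, sigma_theta=0.6):
    """One-sided log-Gabor filters built in the frequency domain.
    A one-sided angular window makes the inverse FFT complex:
    real part = even-symmetric response, imaginary part = odd."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(-fy, fx)
    filters = []
    for o in range(n_orient):
        angle = o * np.pi / n_orient
        # angular distance to the filter orientation, wrapped into [-pi, pi]
        ds = np.sin(theta) * np.cos(angle) - np.cos(theta) * np.sin(angle)
        dc = np.cos(theta) * np.cos(angle) + np.sin(theta) * np.sin(angle)
        dtheta = np.abs(np.arctan2(ds, dc))
        spread = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
        per_scale = []
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency of this scale
            lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
            lg[0, 0] = 0.0                            # remove the DC component
            per_scale.append(lg * spread)
        filters.append(per_scale)
    return filters

def phase_congruency(img, **kwargs):
    """Simplified phase congruency: for each orientation, the magnitude of
    the summed complex responses divided by the summed response amplitudes."""
    spectrum = np.fft.fft2(img.astype(np.float64))
    eps = 1e-6
    energy_sum = np.zeros(img.shape, dtype=np.float64)
    amplitude_sum = np.zeros(img.shape, dtype=np.float64)
    for per_scale in log_gabor_bank(img.shape, **kwargs):
        resp = [np.fft.ifft2(spectrum * f) for f in per_scale]
        even = np.sum([r.real for r in resp], axis=0)
        odd = np.sum([r.imag for r in resp], axis=0)
        energy_sum += np.sqrt(even ** 2 + odd ** 2)                 # in-phase energy
        amplitude_sum += np.sum([np.abs(r) for r in resp], axis=0)  # total amplitude
    return energy_sum / (amplitude_sum + eps)   # close to 1 where components are in phase
```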

3.1.4 Wavelet Transform

Wavelet transforms are based on small waves (wavelets) of limited duration. The power of a signal is usually concentrated in the low-frequency components: the power of the scaling function, which performs the approximation, is concentrated at low frequencies, while the power of the wavelet function φ(t), which captures the details, is concentrated at relatively high frequencies.

The fusion steps in the wavelet domain are:

1. The transform is applied to the registered images; this operation generates coefficients for each image.
2. A fusion rule is established and applied to these coefficients.
3. The fused image is obtained using the inverse transform.

We transform the input images, say A and B, using the wavelet transform, which decomposes a signal into a set of basis functions (wavelets). Wavelets are functions defined over a finite interval, and the magnitudes of the fluctuations are often smaller than those of the original signal, so for a detailed view of the information the Haar wavelet transform is preferred. The Haar wavelet transform decomposes a signal into two sub-signals, each of half its length: a running average and a running difference, so each minute fluctuation in the image can be identified.

Figure: Haar wavelet transform.

3.2 Proposed Algorithm

Consider two perfectly registered source images.

Step 1: Input the two source images A and B.
Step 2: Apply preprocessing: de-noising with a Wiener filter and enhancement with adaptive histogram equalization. This step yields images with better visual quality.
Step 3: Perform the NSCT on the source images A and B to obtain one low-frequency image and a series of high-frequency images; fuse the low-frequency coefficients and the high-frequency coefficients, then perform the inverse NSCT on the fused low- and high-frequency components and store the result in F.
Step 4: Perform the wavelet transform of the source images A and B and store the fused result in C.
Step 5: Apply the combined fusion using the centralization method, which makes the mean and standard deviation of the two transformed images common, and obtain the final fused image D.

Minimal sketches of the wavelet-domain branch and of the centralization step are given below.
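The following is a minimal sketch of the wavelet-domain branch (Steps 2 and 4, together with the fusion steps listed in Section 3.1.4), using the Haar wavelet via PyWavelets. The averaged approximation band and the maximum-absolute-value rule for the detail bands are generic illustrative choices, not the paper's exact fusion rules; the adaptive histogram equalization of Step 2 (for example skimage.exposure.equalize_adapthist) is left out to keep the example short.

```python
import numpy as np
import pywt
from scipy.signal import wiener

def preprocess(img):
    """Step 2 (sketch): Wiener de-noising of the source image."""
    return wiener(img.astype(np.float64), mysize=3)

def wavelet_fuse(img_a, img_b, wavelet="haar", level=2):
    """Step 4 (sketch): decompose both images, average the approximation
    bands, keep the larger-magnitude detail coefficients, then invert."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation band: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # detail bands: keep the coefficient with the larger absolute value
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    a = preprocess(np.random.rand(128, 128))
    b = preprocess(np.random.rand(128, 128))
    print(wavelet_fuse(a, b).shape)
```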

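Step 5 describes a centralization that makes the mean and standard deviation of the two intermediate fused images common before combining them; the underlying equation is not reproduced in this transcription, so the sketch below simply normalizes both results to a shared mean and standard deviation and averages them. This is an assumed reading of the step, not the paper's exact formula.

```python
import numpy as np

def centralize(img, target_mean, target_std, eps=1e-12):
    """Shift and scale an image so its mean and standard deviation
    match the given target values."""
    return (img - img.mean()) / (img.std() + eps) * target_std + target_mean

def centralized_fusion(fused_nsct, fused_wavelet):
    """Bring both intermediate fusion results (F from the NSCT branch,
    C from the wavelet branch) to a common mean and standard deviation,
    then combine them (a simple average here)."""
    mean = (fused_nsct.mean() + fused_wavelet.mean()) / 2.0
    std = (fused_nsct.std() + fused_wavelet.std()) / 2.0
    f = centralize(fused_nsct, mean, std)
    c = centralize(fused_wavelet, mean, std)
    return (f + c) / 2.0   # final fused image D
```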
4. SYSTEM TESTING AND EXPERIMENTAL RESULTS

Some general requirements for a fusion algorithm are: it should be able to extract complementary features from the input images; it must not introduce artifacts or inconsistencies according to the human visual system; and it should be robust and reliable. Fusion results can be evaluated subjectively or objectively. Subjective evaluation relies on human visual characteristics and the specialized knowledge of the observer; it is therefore vague, time-consuming and poorly repeatable, although typically accurate when performed correctly. Objective evaluation is more formal and easily realized by computer algorithms, which generally evaluate the similarity between the fused and source images. However, selecting a criterion consistent with the subjective assessment of image quality is difficult, so an evaluation system is needed. Therefore, an evaluation index system is first established to evaluate the proposed fusion algorithm; these indices are determined from statistical parameters.

5.1 Performance Results of Some Clinical Examples

5.1.1 Experiment with data set 1: Alzheimer's disease

Results of fusion of scan images of a brain affected by Alzheimer's disease. The patient had become lost on several occasions and had difficulty orienting himself in unfamiliar surroundings. In Figure 5.1, the first two columns show the input images A and B, the third column shows the fusion result using the previous NSCT method, and the fourth column shows the result of the proposed hybrid method.

Figure 5.1: Fused MRI and SPECT images of a brain affected by Alzheimer's disease.

5.1.2 Experiment with data set 2: Stroke

Results of fusion of scan images of a brain affected by stroke. A stroke is due to a blood clot in the blood vessels, or to high blood pressure causing a vessel to rupture; an MRI scan is used to determine the type of stroke. In Figure 5.2, the first two columns show the input images A and B, the third column shows the fusion result using the previous NSCT method, and the fourth column shows the result of the proposed hybrid method.

Figure 5.2: Fused MRI and SPECT images of a brain affected by stroke.

5.1.3 Experiment with data set 3: Tumor

A tumor arises from uncontrolled growth of cells and can result in loss of visual function and hemiparesis. An MRI scan reveals the presence of an active tumor, while a SPECT scan indicates aggressive growth and the possibility of spreading. In Figure 5.3, the first two columns show the input images A and B, the third column shows the fusion result using the previous NSCT method, and the fourth column shows the result of the proposed hybrid method.

Figure 5.3: Fused MRI and SPECT images of a brain affected by tumor.

5.1.4 Experiment with data set 4: Glioblastoma

Glioblastoma is a serious type of cancerous tumor. In Figure 5.4, the first two columns show the input images A and B, the third column shows the fusion result using the previous NSCT method, and the fourth column shows the result of the proposed hybrid method. From the comparison of the fused images produced by the NSCT and hybrid methods, it is clear that the hybrid method provides better visual quality and information content.

Figure 5.4: Fused scan images of a brain affected by glioblastoma.

5.2 Evaluation Index System

5.2.1 Structural Similarity Based Metric

The structural similarity based metric is designed by modeling any image distortion as a combination of loss of correlation and contrast distortion.

5.2.2 Mean Square Error (MSE)

The mean square error between the original image R and the fused image F of size M x N is
MSE = (1 / (M * N)) * sum over i, j of (R(i, j) - F(i, j))^2.
A low MSE value indicates better information preservation.

5.2.3 Peak Signal to Noise Ratio (PSNR)

The PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation,
PSNR = 10 * log10(MAX^2 / MSE),
where MAX is the maximum possible pixel value. A high PSNR value indicates better image quality.

5.2.4 Normalized Cross Correlation (NCC)

The NCC is used to measure the similarity between the fused image and the original image. A high NCC value indicates that useful information for diagnosis has been preserved.

5.2.5 Maximum Difference (MD)

The MD is the maximum difference between corresponding pixels of the original and fused images. A low MD value indicates better preservation of image detail.

Table 5.1: Performance Evaluation Table.

The bar charts of SSIM, MSE, PSNR, NCC and MD are based on the measured values for the five data sets shown in the table. From the bar charts and the table it is clear that the proposed hybrid method performs better than the previous NSCT method.
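The metric equations appear as images in the source and are not transcribed, so the sketch below uses the standard definitions of MSE, PSNR, NCC and maximum difference, plus SSIM via scikit-image, applied to a reference image and a fused image. These forms are assumptions consistent with the descriptions above, not copies of the paper's equations.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(ref, fused):
    # mean squared difference between reference and fused image
    return np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)

def psnr(ref, fused, peak=255.0):
    # ratio of peak signal power to error power, in decibels
    return 10.0 * np.log10(peak ** 2 / (mse(ref, fused) + 1e-12))

def ncc(ref, fused):
    # normalized cross-correlation between reference and fused image
    r = ref.astype(np.float64) - ref.mean()
    f = fused.astype(np.float64) - fused.mean()
    return np.sum(r * f) / (np.sqrt(np.sum(r ** 2) * np.sum(f ** 2)) + 1e-12)

def max_difference(ref, fused):
    # largest absolute pixel difference between the two images
    return np.max(np.abs(ref.astype(np.float64) - fused.astype(np.float64)))

def ssim(ref, fused, peak=255.0):
    # structural similarity index from scikit-image
    return structural_similarity(ref, fused, data_range=peak)
```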

5. CONCLUSION

In this paper, an image fusion framework based on a three-level fusion method is proposed for multi-modal medical images. For the final fusion a centralization method is used, by which more information can be preserved in the fused image with improved quality. The low-frequency bands are fused by considering phase congruency, whereas directive contrast is adopted as the fusion measurement for the high-frequency bands. The visual and statistical comparisons demonstrate that the proposed algorithm can enhance the details of the fused image and can improve the visual effect with much less information distortion than its competitors. Further, to show the practical applicability of the proposed method, three clinical examples are considered, involving the analysis of the brains of persons affected by Alzheimer's disease, subacute stroke and recurrent tumor.

6. REFERENCES

[1] Kirankumar Y., Shenbaga Devi S., "Transform-based medical image fusion", Int. J. Biomedical Engineering and Technology, Vol. 1, No. 1, 2007, p. 101.
[2] Sabari Banu R., "Medical Image Fusion by the Analysis of Pixel Level Multi-sensor Using Discrete Wavelet Transform", Proceedings of the National Conference on Emerging Trends in Computing Science, 2011, pp. 291-297.
[3] Nupur Singh, Pinky Tanwar, "Image Fusion Using Improved Contourlet Transform Technique", IJRTE, Vol. 1, Issue 2, 2012.
[4] Tu, Su, Shyu, Huang, "Efficient intensity-hue-saturation-based image fusion with saturation compensation", Optical Engineering, Vol. 40, No. 5, 2001.
[5] Zheng, Essock, Hansen, "An Advanced Image Fusion Algorithm Based on Wavelet Transform Incorporation with PCA and Morphological Processing".
[6] Guihong Qu, Dali Zhang, "Medical image fusion by wavelet transform modulus maxima", 2001.
[7] Ligia Chiorean, Mircea-Florin Vaida, "Medical Image Fusion Based on Discrete Wavelet Transform", 2008.
[8] F. E. Ali, I. M. El-Dokany, A. A. Saad, "Curvelet Fusion of MR and CT Images", Progress In Electromagnetics Research C, Vol. 3, pp. 215-224, 2008.
[9] S. Rajkumar, S. Kavitha, "Redundancy Discrete Wavelet Transform and Contourlet Transform for Multimodality Medical Image Fusion with Quantitative Analysis", 3rd International Conference on Emerging Trends in Engineering and Technology, November 2010.
[10] M. Chowdhury, M. K. Kundu, "Medical Image Fusion Based on Ripplet Transform Type-I", Progress In Electromagnetics Research B, Vol. 30, pp. 355-370, 2011.
[11] T. Li, Y. Wang, "Biological image fusion using a NSCT based variable-weight method", Information Fusion, Vol. 12, No. 2, pp. 85-92, 2011.