Multimodal Image Fusion Of The Human Brain


Isis Lázaro(1), Jorge Marquez(1), Juan Ortiz(2), Fernando Barrios(2)
isislazaro@gmail.com
(1) Centro de Ciencias Aplicadas y Desarrollo Tecnológico, UNAM, Circuito Ext. S/N, Cd. Universitaria, CP 04510, México, DF, Mexico
(2) Instituto de Neurobiología, UNAM, Juriquilla, Querétaro, México

Abstract

We present techniques for visualizing images that combine, in a useful and meaningful way, the information present in 3D volumes from two medical imaging modalities: anatomical, via MRI (Magnetic Resonance Imaging), and functional, from PET (Positron Emission Tomography) or functional MRI (fMRI). Classical visualization of image fusion often uses fixed blending or RGB channel combinations of two or three image modalities, assuming the images are in registration. These tools do not consider the anatomical and functional characteristics of the images. We take advantage of the HSV (Hue, Saturation and Value) color space to enhance anatomical and functional features of the human brain in MRI and PET or fMRI sets. To this end we tested gradient operators, border filters and spatial-domain filters to enhance low spatial variations in functional information, where contours are highly blurred compared to anatomical modalities. We describe a selection of the tested methods and show images of current results.

I. Introduction

For an accurate interpretation of the information in functional images of the brain, physicians need to know the location of specific activations by reference to anatomical images. Image fusion refers to several techniques that combine the intensity, details and complementary features of two or more imaging modalities to improve visualization and interpretation, allowing the extraction of information that would not be visible or clear within each separate modality.
For these reasons, our objective is to obtain images with as much useful data as possible, preserving the identity of the modalities involved (function and anatomical location, in our case), using perceptually clear and flexible fusion methods to facilitate medical diagnosis. In other kinds of problems, image fusion can also be used to combine information partially present in each modality, or from several images of the same modality, or even to combine the output of different filters or image-processing methods. In those cases, the identity of each component is often lost. In this work we report results obtained by fusing PET (I_PET) and MRI (I_MRI) images of the human brain, for the visualization of neural activations and their exact location. We tested methods such as interactive alpha blending, which allows gradual blending of different color channels, with grayscale for the anatomical modality and pseudo-color for the functional modality. The color-space approach, using the HSV (Hue, Saturation and Value) model, preserves the information of both images, since the gray-level intensity (Value) is adequate to show the well-defined edges of I_MRI, while the modulating Hue component, defined by I_PET, is ideal for displaying functional features and does not interfere with the perception of anatomical details. Saturation may be held constant, or may combine information from either channel, or both. Finally, an overlapping-minimization technique was tested, using operators for gradient and contour extraction in I_PET and I_MRI, in order to enhance the fusion results.

II. Materials and Methods

We employed MRI brain images of normal eight-year-old children. Several sets were obtained on a General Electric 3.0 Tesla MR750 scanner (Milwaukee, Wisconsin, USA). Volumetric images were acquired with the FSPGR technique, with Frequency FOV = 24, Phase FOV = 10.75, Slice Thickness = 1.3, Flip Angle = 12, Freq = 320, Phase = 320, NEX = 1, TR = 8.2 and TE = Min Full, with a total of 144 axial slices 1.3 mm thick. Functional images (fMRI) were obtained with a hand-movement paradigm and parameters: Freq FOV = 24.6, Phase FOV = 1.00, Slice Thickness = 3, TR = 3000, TE = 40, Flip Angle = 90, Freq = 64, Phase = 64 and NEX = 1, with a total of 32 images. Image resolution for both sets is 512 × 512. The MRI scanner is at the Neurobiology Institute of the UNAM (National Autonomous University of Mexico). We also employed a functional image set from a Philips PET scanner, with similar settings but a lower resolution of 64 × 64 × 64, at the School of Medicine of the UNAM. The datasets were registered (that is, geometrically aligned and scaled according to the standard Talairach atlas) using Statistical Parametric Mapping (SPM), a MATLAB software package implementing statistical analysis, transformations and image processing specific to neuroimaging data [1].

A. Image and Data Fusion

Data fusion consists of techniques for combining information originating from several sources in order to improve or obtain additional information [2-4]. Depending on the type of data to be fused, the objectives, methods and terminology can vary greatly and need to be chosen precisely. In the case of image processing, fusion is the process of combining relevant information from multiple images of a scene into a single image that is more informative than any of the inputs. In medical imaging, this integration is useful for taking advantage of a multimodality approach to help the detection, diagnosis and management of certain diseases. The different medical imaging modalities can be classified into two main categories: anatomical and functional.
The first category shows morphology: structure location and shape features; such modalities include X-ray, MRI (magnetic resonance imaging) and CT (computed tomography). The second category gives information about metabolism and the locus of physiological responses to stimuli and specific activations; it includes PET (positron emission tomography), SPECT (single photon emission computed tomography) and fMRI (functional MRI). Nowadays, in a clinical environment it is common to take several images of the same patient, either of the same type of study or from different modalities, whether to enhance the final result or to establish the anatomical localization needed to interpret the functional information correctly. Therefore, to increase the understanding of the available information, a method is needed to display two or more images simultaneously, often coming from different medical imaging modalities. Our main objective is to develop visualization techniques for image fusion that combine the complementary and contrasting features of both images into a single one, in order to obtain more medical information and help physicians extract characteristics that usually are not visible in each individual study, or whose localization is uncertain with respect to an anatomical feature. We have noted that the complexity of the data for each modality makes it impossible to implement a single comprehensive method that combines and exploits all the characteristics of the images, so it is important to have a clear idea of the objectives and applications in mind. We have found different contexts and interpretations that make the meaning of terms associated with image fusion confusing. For that reason, we adopt a division of the objectives of image fusion, to facilitate choosing the best method according to specific goals. These objectives are:

1) To combine generally incomplete information from different sources (sensors).
The identity of each data source is usually lost, since a single restored image is formed (the fusion output). An example is obtaining an extended depth-of-field image from several images with short depth of field, focused at different loci. Another example is obtaining the contours of a feature from different modalities, or from different images under different conditions.

2) Simultaneous display of two or more image components (signals or datasets) without losing the identity of each component. This is the topic of the present work and is closely related to Scientific Visualization problems, where interpretation is enhanced by computer-graphics techniques, color and visual cues.

3) Visualization in order to compare two datasets, or to evaluate a processing method. Such a method can be a registration algorithm, considering deformation, or the output of two segmentation techniques to be compared against a gold standard. Many papers report registration techniques where validation is performed by simultaneously visualizing both images; a number of ad hoc fusion techniques are employed [2-5].

4) Combining different models, algorithms, designs, strategies, paradigms, classifiers, interpretations and other abstract information. In this case, data fusion is generalized to other entities and includes not only raw or processed data but also representations, methods, descriptions, etc., coming from the lowest- to the highest-level stages of any task in Computer Vision and other disciplines.

In this work we focus on Objective 2, which represents a timely need in Mexican hospitals: the interpretation, by a neurologist, of the simultaneous display of anatomical and functional information, preserving the identity of brain activity and structural features in order to locate activity in a particular region.

B. Image-Fusion Operators

a. Alpha Blending

Before any fusion was performed, the images were aligned (registered) and normalized using the software SPM, which uses a standard atlas of the brain in the Talairach frame of reference [6].
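Fusion operators act voxel-by-voxel, so both volumes must first share a common grid. SPM performs the full registration and normalization; only the resampling step is sketched below, with nearest-neighbor interpolation for brevity (SPM uses higher-order interpolation) and reduced, purely illustrative grid sizes:

```python
import numpy as np

def resample_nearest(vol, out_shape):
    """Nearest-neighbor resampling of a 3D volume onto a new grid.

    A stand-in for the trilinear/spline interpolation a package
    such as SPM would apply after estimating the transform.
    """
    idx = [np.clip((np.arange(n) * s / n).astype(int), 0, s - 1)
           for n, s in zip(out_shape, vol.shape)]
    return vol[np.ix_(*idx)]

# Hypothetical data: a 64^3 PET volume brought onto a reduced
# stand-in for the 512 x 512 x 144 MRI grid described above.
pet = np.random.rand(64, 64, 64)
pet_on_mri_grid = resample_nearest(pet, (128, 128, 144))
print(pet_on_mri_grid.shape)  # (128, 128, 144)
```

Once both volumes live on the same grid, every fusion operator that follows is a per-voxel channel assignment.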
A perceptual combination of two registered images I_MRI and I_PET is alpha blending with a parameter α, ranging from 0 to 1, in the linear interpolation:

(1 - α) I_MRI + α I_PET    (1)

The problem with such a combination, for visualization purposes, is that the modality identity is lost. A solution is to show each modality in a different color channel or combination of colors (red and cyan, for example), or to show one in conventional grayscale (all three RGB components with the same gray intensity) while the other occupies only one color channel, usually green. If the MRI information is displayed in gray intensities (all RGB channels) and the PET in the green channel, equation (1) is more correctly written, in color-space components (R, G, B), as:

( (1 - α) I_MRI,  (1 - α) I_MRI + α I_PET,  (1 - α) I_MRI )    (2)

A red-blue display of alpha blending, with red for I_MRI and blue for I_PET, works perceptually like grayscale blending, since the color channels still combine. It is written as:

( (1 - α) I_MRI,  0,  α I_PET )    (3)

A cyan scale implies displaying the same information in the G and B color channels. Figure 1 shows a sequence of components, using fMRI images of the same subject as the functional modality. We have already reported similar results for MRI and PET in [7]. The figure also shows the fusion with alpha blending, using different values of alpha and different color-channel selections.

[Figure 1, panels 1-3 (slice 21, alpha = 0.5, 0 and 0.5); see the caption below.]
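The channel assignments of equations (2) and (3) can be sketched as follows (numpy; hypothetical registered slices with intensities assumed normalized to [0, 1]):

```python
import numpy as np

def blend_gray_green(mri, pet, alpha=0.5):
    """Eq. (2): MRI as gray (all three channels), PET added to green."""
    gray = (1 - alpha) * mri
    return np.stack([gray, gray + alpha * pet, gray], axis=-1)

def blend_red_blue(mri, pet, alpha=0.5):
    """Eq. (3): MRI in the red channel, PET in blue, green empty."""
    return np.stack([(1 - alpha) * mri,
                     np.zeros_like(mri),
                     alpha * pet], axis=-1)

# Hypothetical registered slices.
mri = np.random.rand(64, 64)
pet = np.random.rand(64, 64)
rgb = blend_gray_green(mri, pet, alpha=0.5)
print(rgb.shape)  # (64, 64, 3)
```

With alpha = 0 only the anatomy is shown and with alpha = 1 only the functional image, matching the sequence of panels in figure 1.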

[Figure 1, panels 4-6 (slice 21, alpha = 1 and 0.5; sigma = 0.5).]

Figure 1. Images 1 and 2 are the MRI and fMRI images to be fused; images 3 and 4 show alpha blending in a gray-green display; image 5 uses red and cyan channels; and image 6 shows a Gaussian smoothing. The operators are applied over the entire volume dataset.

b. Minimum Overlapping

As noted, anatomical features in MRI and CT images present much finer detail than functional images (fMRI or PET). The former can be enhanced using Laplacian and gradient filters, while the latter can be processed with regional-enhancing operators, which further blur and smooth edges and contours. This combination allows a display where the two modalities remain relatively separated. A way to visualize both images properly, with a clear identification of each modality, is to use color channels not from the RGB color space but from the Hue, Saturation and Value (HSV) color space. By assigning the functional information (I_fMRI) to the Hue channel and the anatomical image to the Value channel, while holding the Saturation channel constant at a medium gray intensity (an image I_128 with uniform intensity 128, assuming a scale from 0 to 255), a clear distinction is achieved. We further tested other combinations, finding the following optimal:

I_HSV = (I_Hue, I_Sat, I_Val) = (I_fMRI, I_MRI, ½(I_MRI + I_fMRI))    (4)

An interpretation improvement for low-to-high activation required inverting the intensity values of I_fMRI by applying a cold-to-hot hue scale. A problem is the cyclic wrap-around of the red and purple hues; so, instead of taking the full hue range, it was limited to start at blue (zero and low intensities) and end at red (the highest intensities). Furthermore, the color scale begins in black, preserving the original background. The images in figure 2 illustrate the result for the same slice of the brain.

Figure 2. (Left) The sidebar scale shows the hue progression when using I_HSV = (I_fMRI, 256, I_MRI); (center) when the functional modality I_fMRI is used as Value, low activation is darkened and a changing sidebar should be used for interpretation; (right) an average of I_fMRI and I_MRI gives a better result, preserving the Hue, which modulates anatomical features. When no activity is present, anatomy appears in gray intensities.

A final parameter allows displaying only an activity range of interest (ARI), coloring a selected window of activation. Small changes can be distinguished by re-scaling the ARI to the full color scale.
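The HSV assignment of equation (4), followed by conversion back to RGB for display, can be sketched as below. The mapping of activation onto the blue-to-red hue arc is our reading of the text, values are assumed normalized to [0, 1], and the slices are hypothetical; the standard-library colorsys module does the per-pixel conversion:

```python
import colorsys
import numpy as np

def fuse_hsv(fmri, mri):
    """Eq. (4): Hue from fMRI, Saturation from MRI,
    Value = average of both; then HSV -> RGB for display."""
    # Limit hue to the blue (low activation) .. red (high activation)
    # arc, avoiding the red/purple wrap-around of the full hue circle.
    hue = (1.0 - fmri) * (240.0 / 360.0)   # 0 -> blue, 1 -> red
    sat = mri
    val = 0.5 * (mri + fmri)
    rgb = np.empty(fmri.shape + (3,))
    for i in np.ndindex(*fmri.shape):
        rgb[i] = colorsys.hsv_to_rgb(hue[i], sat[i], val[i])
    return rgb

# Hypothetical registered slices.
fmri = np.random.rand(32, 32)
mri = np.random.rand(32, 32)
print(fuse_hsv(fmri, mri).shape)  # (32, 32, 3)
```

A zero-activation pixel over bright anatomy comes out as a pure blue of half brightness, while anatomy without any functional signal stays achromatic, matching the behavior described for figure 2.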
Numbers in proper units should be displayed in the sidebar, to interpret the hue variations.

c. Contour Displaying

Anatomical features such as specific brain lobes and cortex lobules are difficult to identify in a grayscale image, especially when color is added to show the functional activation in that area simultaneously. Region contours can be extracted and overlaid to enhance the overall identification and association of activity with such regions. The set of all contours, I_∂MRI, may be extracted by simple techniques such as the gradient operator, as we detail in [7]; or the contours may come from an atlas in geometric registration with the brain under study; or they may be drawn manually by an expert. Hue, Saturation and Value channel combinations may incorporate the contour mask as follows, where now we use I_PET images as the functional modality:

I_HSV = (I_PET, I_128 + ¼ I_∂MRI (I_MRI + I_PET), ½(I_MRI + I_PET))    (5)

where I_∂MRI is 1 at the contour pixels and 0 elsewhere, while I_128 is a constant-value image. Note that contours will appear slightly colored, without interfering with the functional information, and blend with non-contour pixels. Figure 3 shows an example using MRI and PET images.

Figure 3. A functional (mainly hue) PET image combined with an MRI image (shades in the saturation and value channels) with overlaid contours, lightly colored by the PET information.

III. Discussion and Conclusions

We presented current work on the design and visual evaluation of different operators and fusion combinations for displaying two medical imaging modalities together. The criterion for achieving this has been very simple in the special case of functional and anatomical information: on one hand, grayscale intensities are used mainly to display the anatomical data, whose edges and contours are either enhanced or extracted to be superposed on the fusion; on the other hand, color is used as modulation, mainly in the hue channel, for the functional information, which is further blurred since it does not present sharp edges or features. In this way, both modalities are displayed simultaneously, achieving a clear distinction of each one while showing the location of activity in specific, known zones of the brain. The contours may also come from an atlas, or from

an expert, traced by hand; and our software is able to adjust the range of activation to be visualized, as well as other parameters, to better control the look-up-table color scales, for example to map the full output dynamic range onto a small input range. Evaluation by clinicians is under way, and we are also working on three-dimensional volume rendering and surface rendering of brain images with coloring from the fusion process.

Acknowledgment

We thank the PET/CT Cyclotron Unit of the School of Medicine of the UNAM for assistance, evaluation and for providing the PET images, and Dr. Fernando Barrios, Juan Ortiz and Erick Pasaye from the Institute of Neurobiology of the UNAM for assistance and for providing the MRI and fMRI images used in this work.

IV. Bibliography

[1] Statistical Parametric Mapping software website, Wellcome Trust Centre for Neuroimaging, UK, 2010. http://www.fil.ion.ucl.ac.uk/spm/
[2] M. van Herk, J.C. de Munck, J.V. Lebesque, S. Muller, C. Rasch, and A. Touw, "Automatic registration of pelvic computed tomography data and magnetic resonance scans including a full circle method for quantitative accuracy evaluation," Medical Physics 25, 2054-2067 (1998).
[3] J.B. West and J.M. Fitzpatrick, "Point-based Rigid Registration: Clinical Validation of Theory," Medical Imaging 2000: Image Processing, K.M. Hanson, Ed., Proceedings of SPIE 3979, pp. 353-359 (2000).
[4] C. Studholme, D.L.G. Hill, and D.J. Hawkes, "Automated 3D Registration of MR and CT Images of the Head," Medical Image Analysis 1, 163-175 (1996).
[5] J.B.A. Maintz and M.A. Viergever, "A Survey of Medical Image Registration," Medical Image Analysis 2, 1-36 (1998).
[6] K.J. Friston, "Statistical Parametric Mapping and Other Analyses of Functional Imaging Data," in Brain Mapping: The Methods, pp. 363-385, Academic Press (1996).
[7] J. Márquez, A. Gastelum, and M.A. Padilla, "Image-Fusion Operators for 3D Anatomical and Functional Analysis of the Brain," work 1661, 29th International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22-26 August 2007, pp. 833-835.