ECSE 626 Project Report Multimodality Image Registration by Maximization of Mutual Information


Emmanuel Piuze
McGill University, Montreal, QC, Canada
epiuze@cim.mcgill.ca

Abstract

In 1997, Maes et al. [5] published one of the first papers to propose mutual information as a similarity metric for performing multimodal image registration. More than ten years later, mutual information is found at the core of many state-of-the-art image registration algorithms [2]. The aim of this project is to demonstrate its simplicity, accuracy, and robustness by implementing the original algorithm proposed by Maes et al.

1. Introduction

From the first x-ray image of a hand, taken in the 1890s, and the first magnetic resonance image, obtained in 1973, the literature on medical imaging techniques has grown considerably. Developments in this field have been driven by the search for the quintessential imaging modality: a noninvasive technique offering both high temporal and spatial resolution, flexible structural contrast, and freedom from physical image degradation. Although significant research and improvements were made in recent years and powerful imaging techniques were designed, this quest is still ongoing [1].

The main issue preventing current state-of-the-art imaging modalities from attaining such a gold standard is their structural specificity. Every imaging technique behaves poorly under certain conditions, and none can differentiate all tissue classes. There is a strong need for a general-purpose biomedical scanner; meanwhile, the information obtained by combining images acquired with different imaging modalities can be complementary and reveal structural information that cannot be seen in any individual image alone.
However, combining multimodal images is in many cases non-trivial, since images acquired with different techniques can differ in orientation, volume size, color contrast and intensity levels, acquisition artifacts, and noise distributions. Misaligned multimodal images must first be mapped to a common coordinate system before they can be combined, a process known as image registration. Like imaging techniques, image registration has been a hot research topic for many years, and the field has generated a profusion of methods [2]. The last decade has seen a new generation of algorithms based on the principle of mutual information, an information-theoretic measure of the relationship between random variables. In 1997, Maes et al. [5] published one of the first papers to use mutual information (MI) in the context of medical multimodal image registration. Independently, during the same year, Viola and Wells [6] developed a multimodality image registration algorithm also based on mutual information [8].

The aim of this project was to implement and test the multimodal MI registration algorithm discussed in the Maes et al. paper. Section 1.1 discusses the principles of medical image registration. Section 2 presents the algorithm, the datasets on which it was tested, and the effects of partial overlap and different kinds of image degradation on the robustness of the algorithm. Section 3 discusses the results and their implications.

1.1. Image Registration

Image registration consists of transforming images in order to align them. More specifically, an image R is defined as the reference image and determines the coordinate system to which other, floating images F are transformed. The goal of the registration process is to determine the optimal registration parameter α that maximizes some measure M of the alignment of F with respect to R.
Image registration is an iterative process in which four main steps are repeated until convergence or stabilization: an image transformation step, an interpolation step, a similarity measure step, and an optimization step. The procedure cycles when the optimizer selects the next registration parameter α to use for the image transformation, and terminates when the optimizer reaches a global or local optimum. The image registration pipeline is shown in Figure 1 and explained in the following subsections.
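As an illustration of this loop (a Python/NumPy sketch under stated assumptions, not the report's Matlab implementation), the four steps can be wired together for the simplest case of a pure 2D translation, with SciPy's Powell optimizer standing in for the direction-set method of Section 1.1.5 and bilinear resampling standing in for PV interpolation:

```python
import numpy as np
from scipy.ndimage import shift as translate
from scipy.optimize import minimize

def mutual_information(a, b, bins=16):
    # Similarity step: MI estimated from the joint histogram of a and b.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / np.outer(px, py)[nz])).sum())

def register(R, F, alpha0=(0.0, 0.0)):
    # One registration loop: transform + interpolate (bilinear shift),
    # evaluate the metric, let Powell propose the next parameter.
    cost = lambda t: -mutual_information(R, translate(F, t, order=1))
    return minimize(cost, alpha0, method="Powell").x
```

Rotations and the PV scheme are omitted for brevity; `register` returns the translation that maximizes MI.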

Figure 1: Image registration pipeline (initial conditions → transformation Tα F → interpolation → similarity metric M(R, F) → optimization → solution α*).

1.1.1 Image Transformation

Given an N-voxel image F = (I1, ..., IN), where Ij is the intensity of a voxel pj, and a rigid transformation Tα, where α = (φx, φy, φz, tx, ty, tz) is the registration parameter, the transformed image Fα is given by

Fα = Tα F    (1)

where each voxel pj is transformed by Tα:

p′j = [ Rx(φx)Ry(φy)Rz(φz)  t ; 0  1 ] pj    (2)

where Rx(φx), Ry(φy), Rz(φz) are the rotation matrices about the x, y, z axes, respectively, and t = (tx, ty, tz) is the translational component of the rigid transformation. These six parameters fully define the registration parameter α.

1.1.2 Interpolation

The transformed voxels of the image Fα do not necessarily fall on the same voxel grid as the reference image. Image interpolation is thus required in order to properly assign intensity values to this voxel grid. Many different methods exist for performing image interpolation. Nearest-neighbor (NN) interpolation sets the intensity value of the transformed voxel p′j to the closest voxel grid point. Trilinear (TRI) interpolation distributes a distance-weighted intensity value of p′j amongst its neighbors. Trilinear partial volume distribution (PV) interpolation uses the same intensity weighting as TRI, but instead of defining new intensity values in the image Fα, it distributes the weights directly in the joint histogram of the images R and Fα. For this project, the PV interpolation method was used.

1.1.3 Joint Histogram

The joint histogram h(F, R) of two images F and R is built by visiting all their corresponding voxel pairs and identifying histogram coordinates (i, j) = (F(x, y, z), R(x, y, z)) such that h(i, j) is incremented by one (or by a weight) for each pair. Figure 2 shows the joint histogram for T1- and T2-weighted images. As discussed in Haacke et al. [1], the bright region represents gray matter.

1.1.4 Mutual Information

The mutual information I(F, R) between two images R and F is related to their entropy H:

I(F, R) = H(F) + H(R) − H(F, R)    (3)

where the entropy of an image X is given by

H(X) = − Σ_{xi ∈ X} p(xi) log2 p(xi)    (4)

and p(x) is the probability mass function of X. The marginal and joint probability distributions of F and R can be estimated from their joint histogram h(F, R):

p_{F,R}(f, r) = h(f, r) / Σ_{f,r} h(f, r)    (5)
p_F(f) = Σ_r p_{F,R}(f, r)    (6)
p_R(r) = Σ_f p_{F,R}(f, r)    (7)

1.1.5 Optimization

In Maes et al., Powell's multidimensional direction set method is used to minimize the negative mutual information Iα obtained with the registration parameter α. Powell's algorithm applies a series of one-dimensional minimizations in the registration parameter α = (φx, φy, φz, tx, ty, tz). The algorithm terminates when no solution with a significantly lower cost than the previous one can be found, as controlled by a tolerance factor [2].

2. Experiments

2.1. Datasets

In Maes et al., the algorithm was tested on four datasets. The first contained high-resolution MR and CT images that were also registered manually by a physician to provide a gold standard for comparing the results of the algorithm. The images of the second dataset were smoothed and subsampled versions of those in the first one. The third contained MR, CT, and PET images, also registered manually by an expert. The last dataset contained a single image and was used to investigate the effects of image degradation on the robustness of the algorithm. In this project, three datasets containing transversal (TR) and sagittal (SG) T1-, T2- and PD-weighted MRI images were used (see Table 1).
MRI records the magnetization response of hydrogen atoms exposed to a high magnetic field, and the different weightings refer to the type of spin parameter being monitored, resulting in different intensity contrasts. Datasets A and B contained perfectly aligned MR images of T1, T2, and PD weightings. Dataset A contained high-resolution TR images, and dataset B low-resolution SG images. Our third dataset (C) consisted of a single high-resolution T1-weighted TR image, used for testing the robustness of the algorithm.

Set  Weight  Axis  Size     Intensity
A    T1      TR    512×512  0–256
     T2      TR    512×512  0–256
     PD      TR    512×512  0–256
B    T1      SG    100×100  0–256
     T2      SG    100×100  0–256
     PD      SG    100×100  0–256
C    T1      TR    512×512  0–256

Table 1: Datasets used for the experiment.

As a reference for comparing the registration results, Table 2 shows the typical intensity contrast of brain features (white matter, gray matter, cerebrospinal fluid (CSF), and bone) for different MRI weightings. Figure 2 shows examples of T1-, T2- and PD-weighted MRI images and a T1–T2 joint histogram. Figure 3 shows the main structure classes (right) of a T1-weighted brain image (left) [4].

Feature       T1       T2       PD
White matter  Bright   Dark     Darker
Gray matter   Neutral  Neutral  Neutral
CSF           Black    Bright   Brighter
Bone          Black    Black    Darker

Table 2: Contrast of brain features for T1, T2, and PD [1].

Figure 2: (a) T1, (b) T2, and (c) PD weighted images, and (d) the T1–T2 joint histogram.

Figure 3: Segmentation of a T1-weighted brain MR image for gray matter (gray), white matter (white), and CSF (blue) [4].

2.2. Algorithm

2.2.1 Implementation

The algorithm was implemented in Matlab R2008a and tested on an Apple MacBook Pro (dual-core, 2.4 GHz) running Mac OS X. We implemented the algorithm for the case of 2D images in order to reduce the computational complexity of the optimization and speed up the running time. Rigid transformations were interpolated with the PV method. We estimated the image probability distributions from the joint histogram h(F, R) with a bin size of 16. The registration parameter α was initialized to φz = tx = ty = 0.
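The PV interpolation step of Section 1.1.2 can be sketched as follows for the 2D case used in this project. This is a hypothetical Python/NumPy illustration rather than the report's Matlab code: `transform` is assumed to be any function mapping an (N, 2) array of (x, y) voxel coordinates, and intensities are assumed to be integers in [0, 256):

```python
import numpy as np

def pv_joint_histogram(F, R, transform, bins=16, imax=256):
    # Joint histogram by partial-volume (PV) interpolation in 2D.
    # Each voxel of F is mapped through `transform`; its bilinear weights
    # are distributed over the four surrounding grid points of R, updating
    # the joint histogram directly instead of resampling intensities.
    h = np.zeros((bins, bins))
    scale = imax // bins
    hgt, wid = R.shape
    ys, xs = np.indices(F.shape)
    pts = transform(np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float))
    for (x, y), f in zip(pts, F.ravel()):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        for gx, gy, w in ((x0, y0, (1 - dx) * (1 - dy)),
                          (x0 + 1, y0, dx * (1 - dy)),
                          (x0, y0 + 1, (1 - dx) * dy),
                          (x0 + 1, y0 + 1, dx * dy)):
            if 0 <= gx < wid and 0 <= gy < hgt:
                h[int(f) // scale, int(R[gy, gx]) // scale] += w
    return h
```

With the identity transform every weight lands on a single grid point, so this reduces to the plain joint histogram of Section 1.1.3.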
The pseudocode for the multimodal image registration algorithm is given in Algorithm 1.

Data: A reference image R and a floating image F.
Result: The optimal registration parameter α for aligning F to R.
Set the initial registration parameter α0;
while Powell's method has not converged do
    Apply the transformation Tα to F;
    forall voxels of Tα F that do not fall exactly on R's grid do
        Apply PV interpolation and distribute intensity values in h using TRI weights;
    end
    Fill h with the remaining voxel intensities;
    Compute the mutual information I(h(F, R));
    Apply Powell's method to α and I;
    Set α to the parameter returned by Powell's method;
end
Algorithm 1: Multimodal image registration by maximization of mutual information.

2.3. Multimodal Registration

We applied an arbitrary in-plane rotation and translation defined by α = (φ = π/16 rad = 11.25°, t = (5, 10)) to datasets A and B, and registered them using Algorithm 1. The magnitudes of the differences ‖α* − α‖ between the solutions α* obtained and the optimal parameter α are summarized in Table 4.

2.4. Image Degradation

Image degradation alters the intensity distribution of the image and can affect the robustness of the registration process. The effects considered were noise, intensity inhomogeneity, and geometric distortion. We applied translations in the x direction (−64 to 64 pixels) to the degraded images and computed their MI with the original image, resulting in so-called MI traces. As a comparison, the MI traces for the original image of dataset C are shown in Figure 4.
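The MI traces can be reproduced schematically with the sketch below (Python/NumPy for illustration; a circular `np.roll` is assumed as a simple stand-in for translation with boundary handling, and `mi` implements Eqs. (3)–(7) in an equivalent compact form):

```python
import numpy as np

def mi(a, b, bins=16):
    # Mutual information from the joint histogram, equivalent to
    # I(F, R) = H(F) + H(R) - H(F, R) from Eqs. (3)-(7).
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / np.outer(px, py)[nz])).sum())

def mi_trace(R, F, shifts):
    # MI of R against F translated along x by each shift in `shifts`.
    return np.array([mi(R, np.roll(F, s, axis=1)) for s in shifts])
```

For an undegraded image paired with itself, the trace peaks at zero shift, mirroring Figure 4.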

Figure 4: MI traces for the original dataset C image.

2.4.1 Noise

Noise naturally occurs in any image acquisition process, due to the limitations of the optics and recording devices. Maes et al. considered white zero-mean Gaussian noise with variances of 50, 100, and 500. We tested the same noise model up to significantly higher noise levels, with standard deviations σ = 10, 100, and 1000, superimposed on the original dataset C image. The MI traces we obtained are shown in Figure 5.

Figure 5: MI traces for the dataset C image degraded with noise of standard deviation σ = 10, 100, 1000.

2.4.2 Intensity Inhomogeneity

Intensity inhomogeneity is often encountered in MRI, usually caused by radio-frequency nonuniformity, static field inhomogeneity, and patient movement [3]. In order to measure these effects, we modified the original image I of dataset C by adding a planar quadratic inhomogeneity factor [5] centered at (xc, yc) and of magnitude k, resulting in an altered image I′:

log I′(x, y) = log I(x, y) + Δlog I(x, y)    (8)
Δlog I(x, y) = k((x − xc)² + (y − yc)²)    (9)

We computed the MI traces for k values of 0.002, 0.004, and 0.006, with xc = yc = 256 (Figure 6).

Figure 6: MI traces for the dataset C image degraded with intensity inhomogeneity of scaling k = 0.002, 0.004, 0.006.

2.4.3 Geometric Distortion

Geometric distortion is an unavoidable degradation effect caused by the hardware generating the magnetic field, more specifically the superconducting coils that produce it. The two major sources of geometric (or spatial) distortion in MR images are gradient field nonlinearities and static field inhomogeneities [7].
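Both degradation models are driven by the same quadratic field centered at (xc, yc): it serves as the log-intensity bias Δlog I in Eq. (9) and as the spatial offset Δx in the geometric distortion model (Eq. 10). A minimal NumPy sketch (illustrative only; note that with pixel coordinates and the k values above, exponentiating the bias per Eq. (8) requires intensity rescaling to keep values bounded):

```python
import numpy as np

def quadratic_field(shape, k, xc, yc):
    # k * ((x - xc)^2 + (y - yc)^2), evaluated at every pixel.
    y, x = np.indices(shape)
    return k * ((x - xc) ** 2 + (y - yc) ** 2)
```

With k = 0.002 and xc = yc = 256, the field vanishes at the image center and grows quadratically toward the corners.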
Geometric distortion was modeled by a quadratic nonlinear correction [5]:

Δx = k((x − xc)² + (y − yc)²)    (10)
I′(x, y) = 2k(x − xc) I(x + Δx, y)    (11)

where (xc, yc) is again the center of the image plane and k a distortion scaling factor. Geometric distortion was applied to the dataset C image, and the modified image was then registered with the original one. We computed the MI traces for k values of 0.0005, 0.00075, and 0.001, with xc = yc = 256 (Figure 7).

2.5. Partial Overlap

Partial overlap occurs when the floating image to be registered does not have the same imaging window size as the reference image. This problem usually arises when registering multimodal images, as certain imaging techniques

require smaller window sizes to reduce the amount of tissue exposed to radiation, and when information is missing from an image slice in 3D volumes. In these cases, since structural information is missing from either the floating or the reference image, only a portion of the whole image is considered in the computations. This effect was tested by purposely cutting out portions (bottom, middle, and top) of the T2 image of dataset A after applying the transformation discussed in Section 2.3, and registering it with the T2-weighted image of dataset A. We proceeded with the registration and computed ‖α* − α‖, the magnitude of the difference between the registration parameter α* obtained and the transformation parameter α. The results are summarized in Table 3.

ROI     Rows     φ      tx     ty     ‖α* − α‖
Full    1–512    0.195  −2.04  −9.60  2.9
Top     1–171    0.196  −2.06  −9.51  3.0
Middle  171–340  0.196  −2.07  −9.56  3.0
Bottom  340–512  0.196  −1.87  −9.95  3.1

Table 3: Influence of partial overlap on registration robustness.

Figure 7: MI traces for the dataset C image degraded with geometric distortion of scaling k = 0.0005, 0.00075, 0.001.

2.6. Subsampling

Subsampling was used to measure the performance of the algorithm on low-resolution images. We modeled subsampling by removing voxels (setting them to zero-level intensity) at predefined intervals fx and fy along the x and y directions, with fx and fy each taking values of 1, 2, 3, or 4, independently. The results are shown in Figure 8 as ‖α* − α‖ against the subsampling factor f = fx · fy.

Figure 8: Optimal registration parameter magnitude difference ‖α* − α‖ as a function of the subsampling factor f = fx · fy.
Set  Pair       φ      tx     ty      ‖α* − α‖  t (s)
A    Reference  0.196  −5     −10
     T1–T2      0.197  −2.10  −9.90   2.90      65.1
     T1–PD      0.206  −2.08  −9.60   2.94      74.6
     T2–T1      0.195  −2.04  −9.60   2.99      64.3
     T2–PD      0.197  −2.10  −9.67   2.92      70.0
     PD–T1      0.196  −2.02  −9.70   3.00      64.7
     PD–T2      0.197  −1.97  −9.69   3.05      73.3
     Mean time                                  68.7
B    Reference  0.196  −5     −10
     T1–T2      0.191  −2.04  −9.47   3.0       3.9
     T1–PD      0.172  −2.47  −9.03   2.7       3.5
     T2–T1      0.244  −0.71  −10.33  4.3       4.1
     T2–PD      0.178  −2.48  −9.68   2.6       3.2
     PD–T1      0.195  −2.14  −9.78   2.9       4.5
     PD–T2      0.190  −2.05  −9.48   3.0       3.6
     Mean time                                  3.8

Table 4: Results and running times for applying the registration algorithm on datasets A and B.

3. Discussion

Table 4 shows that although the algorithm performed generally well for T1 and T2 registration, some points need to be addressed. First, we noticed a constant translational bias along the x direction. We suspect this bias is caused both by transformation rounding errors and by the lack of structural variation along the symmetry axis of the transverse plane defined by the anterior commissure (AC) and posterior commissure (PC) line. This result suggests that there is more structural variability tangential to the AC-PC line. Moreover, PD did not perform well under translation, in both the x and y directions. We suspect this is caused by the particular lack of contrast of PD images along the AC-PC line, and that the algorithm may have tried to match CSF to the AC-PC line, since CSF offers more contrast along the midline.

The algorithm was fairly robust to noise. The peak of maximal MI was not shifted in Figure 5, even for very high noise levels (σ = 1000), while the maximum peak

was significantly lower and much sharper. This result reflects the ability of mutual information to capture the structural correlation of two images: the noise diminished the amplitude of the correlation because it contains no structure of its own. Similarly, the algorithm was robust to intensity inhomogeneity and partial structural overlap. Applying intensity gradients did not influence the robustness of the algorithm, as the MI criterion is intrinsically invariant to uniform intensity shifts. The registration traces shown in Figure 6 confirm this intuition: the optimal registration parameter corresponding to the peak of maximum mutual information was not changed, although its magnitude did change, reflecting the information loss in regions where the intensity gradient reduced the dynamic range. However, adding or removing structural information from an image does affect its probability distribution, resulting in a different registration entropy. In the case of well-behaved additive information (such as intensity inhomogeneity or partial overlap), either the modified intensities are concentrated in a region of low significance (entropy-wise) or there is enough information left to represent the full image structure, and the optimal registration parameter is not significantly affected. This can be seen in the effects of partial overlap, shown in Table 3, and of subsampling, shown in Figure 8, for which leaving out intervals actually improved the contrast of brain boundaries and resulted in a better registration. In other cases, critical regions (crest lines or ridges, for example) will be altered, significantly perturbing the probability distribution of the image and thereby affecting its registration. The running time for registering dataset B was so low that subsampling could certainly be used to obtain a coarse initialization of the registration parameter.
The algorithm performed well under the rigid-body assumption when a small geometric distortion magnitude was applied. In Figure 7, we see that the global optimum of the MI trace is shifted proportionally to the amount of applied distortion. The location and magnitude of these optima were expected: they reflect both the fact that the intensity deformation was applied along the x direction, as imposed by (11), and the inability of the rigid transformation (in this case, a translation along x) to invert the transformation applied by the geometric distortion. Also, being purely intensity-based, mutual information does not by itself consider the spatial context of image pairs. Given a reference image R, there can be many floating images F that yield the same mutual information I(F, R). This point is critical if more general image transformations are applied in the registration process, as adding degrees of freedom leads to many more possible solutions and local optima in the registration parameter optimization stage. While geometric distortion typically has a small magnitude, even when the imaging device has not been properly calibrated [7], the rigid-body assumption restricts the generality of the algorithm, and images acquired with either old-generation or slightly defective scanners might not yield good registration results. In that case, allowing the affine transformation more degrees of freedom (DOF), for example by adding anisotropic scaling (9 DOF) and skew (12 DOF), would give the optimizer more flexibility in reaching the global optimum. However, these added DOF would also raise the dimensionality of the registration process, and badly suited optimization methods might get stuck in local optima and lead to erroneous registration parameters, resulting in unwanted image distortions. With all these potential optimizations, it is not surprising that many years after Maes et al.
was published, extensive research is still being done, concentrating independently on the four main registration stages shown in Figure 1.

4. Conclusion

The goal of this project was to implement the original algorithm developed by Maes et al. [5]. The results were consistent with those obtained in the original paper; that is, the criterion performed generally well but tended to be less accurate when translations were applied perpendicularly to the AC-PC axial line. Translation was identified as a potential problem for images with low AC-PC contrast, such as those acquired with PD weighting. Geometric distortion pointed out the limitations of using rigid transformations. Finally, subsampling was identified as a potential way to obtain a coarse initial approximation of the registration parameter.

References

[1] E. M. Haacke, R. W. Brown, M. R. Thompson, and R. Venkatesan. Magnetic Resonance Imaging: Physical Principles and Sequence Design. Wiley-Liss, 1999.
[2] J. V. Hajnal. Medical Image Registration. CRC Press, Cambridge, June 2001.
[3] Z. Hou. A review on MR image intensity inhomogeneity correction. International Journal of Biomedical Imaging, 2006(49515):11, 2006.
[4] K. Krishnan and J. MacFall. Location of white matter hyperintensities and alterations in white matter tracts in late onset. CAMRD Project 268, 2004.
[5] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens. Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging, 16(2):187-198, April 1997.
[6] P. Viola and W. M. Wells. Alignment by maximization of mutual information. International Journal of Computer Vision, 24(2):137-154, 1997.
[7] D. Wang, W. Strugnell, G. Cowin, D. M. Doddrell, and R. Slaughter. Geometric distortion in clinical MRI systems: Part I: evaluation using a 3D phantom. Magnetic Resonance Imaging, 22(9):1211-1221, 2004.
[8] B. Zitova and J. Flusser. Image registration methods: a survey.
Image and Vision Computing, 21(11):977-1000, October 2003.