Probabilistic inference of regularisation in non-rigid registration

Ivor J.A. Simpson a,b,*, Julia A. Schnabel a, Adrian R. Groves b, Jesper L.R. Andersson b, Mark W. Woolrich b,c

a Institute of Biomedical Engineering, Department of Engineering Science (IBME), Old Road Campus Research Building, University of Oxford, Headington, Oxford OX3 7DQ, UK
b Oxford Centre for Functional MRI of the Brain (FMRIB), University of Oxford, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK
c Oxford Centre for Human Brain Activity (OHBA), University Department of Psychiatry, Warneford Hospital, Oxford OX3 7JX, UK

* Corresponding author at: Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, Oxford OX3 7DQ, UK. E-mail address: ivor.simpson@eng.ox.ac.uk (I.J.A. Simpson).

Article history: Received 18 April 2011; Revised 6 August 2011; Accepted September 2011; Available online 10 September 2011.

Keywords: Registration; Bayesian modelling; Regularisation

Abstract

A long-standing issue in non-rigid image registration is the choice of the level of regularisation. Regularisation is necessary to preserve the smoothness of the registration and to penalise unnecessary complexity. The vast majority of existing registration methods use a fixed level of regularisation, which is typically hand-tuned by a user to provide "nice" results. However, the optimal level of regularisation depends on the data being processed: lower signal-to-noise ratios require higher regularisation to avoid registering image noise as well as features, and different pairs of images require registrations of varying complexity depending on their anatomical similarity. In this paper we present a probabilistic registration framework that infers the level of regularisation from the data. An additional benefit of this probabilistic framework is that estimates of the registration uncertainty are obtained. The framework has been implemented using a free-form deformation transformation model, although it is generically applicable to a range of transformation models. We demonstrate our registration framework on inter-subject brain registration of healthy control subjects from the NIREP database. Our results show that the framework appropriately adapts the level of regularisation in the presence of noise, and that inferring regularisation on an individual basis leads to a reduction in model over-fitting, as measured by image folding, while providing a similar level of overlap.
© 2011 Elsevier Inc. All rights reserved.

Introduction

Medical image registration is an important stage in scientific and clinical studies, both group-wise and longitudinal. It provides an estimate of the mapping between one image and another. In order to maximise anatomical or functional correspondence between images, non-rigid registration methods provide a mechanism for high-resolution anatomical alignment (Crum et al., 2004). These algorithms allow flexible and localised mappings between images. An issue present in all approaches to non-rigid registration is how to regularise the inferred model parameters. In the field of non-rigid registration, regularisation is commonly used to provide a penalty against rough deformations, to ensure that the estimated transformations are spatially smooth. In some regularisation approaches, this has the additional effect of penalising the inference of complex mappings.

Spatial smoothness of the transformation is a necessary, but not sufficient, condition for maintaining the topology of the original image after transformation. The preservation of topology encourages spatially adjacent features in the original image to remain adjacent in the transformed image. It is also appropriate to penalise the complexity of a registration to ensure the plausibility of a mapping. Purely maximising a similarity measure can produce very large, complicated and noisy deformations, as there is no restriction on the complexity of the mapping that is required to improve the model fit (Ashburner and Friston, 1999). The approach of penalising the path length, or deviation from the identity transformation, of the inferred mapping is used in several recent diffeomorphic registration methods (Ashburner, 2007; Ashburner and Friston, 2011; Avants et al., 2008; Beg et al., 2005), and it is clear that the smoothest, shortest mapping which leads to an equivalent model fit is preferable. When using a small-deformation framework, such as a free-form deformation (FFD) model (Rueckert et al., 1999), regularisation is also used to reduce any folding of the image which may occur in complex or noisy transformations. Regularisation often takes the form of membrane, or thin-plate spline bending, energy. These simple models penalise deviation from the identity transformation in the second derivative, or bending in the transformation, respectively. In current approaches, these models have a fixed regularisation coefficient which controls the strength of the regularisation. Regularisation parameter values have traditionally been selected using a trial and improvement strategy, where a user finds an appropriate set of parameters which provide qualitatively reasonable results over a specific set of data. As multi-resolution schemes are commonly utilised in non-rigid registration, this would require a user to hand-tune several parameters. Alternatively, regularisation parameters could be selected by testing a range of values and assessing registration performance according to some external metric, such as

segmentation accuracy (Yeo et al., 2010), which requires manually labelled representative data to train on. However it is derived, a fixed level of regularisation makes the assumption that all data require a similar level of regularisation, whereas the optimal level of regularisation depends on the data presented to it. For example, the anatomy of a particular individual will be more similar to some individuals than to others. Therefore, a "one size fits all" approach to penalising complexity in registration will naturally lead to over- or under-constraining the transformation in some circumstances. Furthermore, higher regularisation may be required when there is a low signal-to-noise ratio (SNR), to constrain the optimisation against noise.

In this work we propose a novel, principled approach for inferring the regularisation parameter required for non-rigid registration in a data-driven way. This is achieved by modelling the regularisation parameter within a hierarchical Bayesian model. This adaptivity in the registration approach allows flexible treatment for different data and multi-resolution optimisation schemes without necessitating any hand-tuning of the regularisation. In Methods we describe our novel probabilistic framework for non-rigid image registration with an inferred level of spatial regularisation, and demonstrate its application to inter-subject registration of MR images of the human brain. In the following section we describe the probabilistic model which is used to drive the registration process, and how prior information can be introduced on the model parameters to provide regularisation. We demonstrate how variational Bayes can be used to define approximate posterior distributions for the model parameters. As this is a generic framework, any parametrisable transformation model can be used with this probabilistic inference scheme, but for demonstration purposes we implement it using a free-form deformation transformation model. We demonstrate that inferring individual regularisation parameters for inter-subject brain registration provides adaptivity to a wide range of SNRs. We also show that individual adaptivity in regularisation yields similarly accurate registrations compared to fixed regularisation, with less image folding, as the transformation is more appropriately constrained. Finally, we illustrate the registration uncertainty which arises naturally from this probabilistic framework.

Methods

The process of image registration can be described probabilistically by using a generative model. The majority of generative models for registration use an image similarity term based on the sum-of-squared differences (SSD), which has previously been demonstrated as being appropriate for single-modal brain registration (Ashburner and Friston, 1999). This simple model can be improved upon by explicitly modelling a spatially varying non-linear intensity mapping between images (Andersson et al., 2007). Alternative models have also been proposed which permit different image similarity measures, for example the minimisation of the entropy of the joint image distribution (Zöllei et al., 2007). However, in our work we choose to use a Gaussian likelihood due to its simple and efficient formulation.

It is assumed that the target image data y can be generated from a source image x when it is deformed using some transformation model, where t(x, w) is the transformed source image and w parameterises the transformation. The specific form of t used in this implementation is a free-form deformation (FFD), where w is the set of control point displacements in each direction; the rationale for this choice is discussed in Transformation model. This model will contain some residual error throughout the registration process, either due to misalignment of the images, or because of noise in the image formation process. This error needs to be included within the generative model. The model noise is assumed to be independent and identically distributed (i.i.d.) across image voxels. In this case, the noise is assumed to be normally distributed, e ~ N(0, ϕ^{-1} I), where I is the identity matrix and ϕ is a global precision (inverse variance) of the error across the image. Therefore, the generic generative model for registration can be formulated as:

y = t(x, w) + e    (1)

Hence, any given voxel, indexed by i, has a likelihood of:

P(y_i \mid x_i, w, \phi) = \left( \frac{\phi}{2\pi} \right)^{1/2} \exp\left( -\tfrac{1}{2} e_i \phi e_i \right)    (2)

From Eq. (2), the log-likelihood of the target image data y given the set of model parameters can be defined as:

\log P(y \mid x, w, \phi) = \frac{N_v}{2} \log \frac{\phi}{2\pi} - \frac{\phi}{2} (y - t(x, w))^T (y - t(x, w)) + \mathrm{const}_{\{w, \phi\}}    (3)

where N_v is the number of voxels containing foreground information in either image, which are also present in the overlap region between the two image domains t(x, w) and y, and const_{w, ϕ} contains all terms which are constant with respect to w and ϕ.

In order to regularise the registration, prior distributions over the parameters are included. This provides a full probabilistic model, which is graphically described in Fig. 1. The model parameters are inferred to find the mapping between images.

Priors

Transformation parameters

The set of transformation parameters w needs to be spatially regularised. Regularisation is used to preserve the topology of the source image, by enforcing spatial covariance in the transformation parameters, and to penalise the registration complexity such that complex transformations are not inferred without sufficient image information. In this case, transformation complexity is a measurement of deviation from the spatial prior, which in the case of an elastic approach to regularisation is the identity transformation with some covariance form, as described below. We propose to incorporate regularisation into this probabilistic framework by assigning an appropriate prior distribution to the transformation parameters.

Fig. 1. A graphical description showing the probabilistic dependencies of the registration model parameters. The variables in square boxes are constants, and the variables in circles are random variables. The image y is generated by some transformation function t, parametrised by w. The prior on w is given in Eq. (4) and is parameterised by λ. The Gaussian noise e of the model has a single parameter ϕ.
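To make the generative model concrete, the following is a minimal sketch (not the authors' implementation) of evaluating the Gaussian log-likelihood of Eq. (3) for a candidate warp, assuming the warped source image t(x, w) has already been resampled onto the target grid; the function and variable names are illustrative only.

```python
import numpy as np

def gaussian_log_likelihood(target, warped_source, phi, mask):
    """Log-likelihood of Eq. (3): i.i.d. Gaussian residuals with precision phi.

    target, warped_source : 3D arrays on the same voxel grid
    phi  : global noise precision (inverse variance)
    mask : boolean array selecting the N_v foreground/overlap voxels
    """
    residual = (target - warped_source)[mask]   # e = y - t(x, w)
    n_v = residual.size                         # N_v
    # (N_v/2) log(phi / 2 pi) - (phi/2) e^T e, constants in {w, phi} dropped
    return 0.5 * n_v * np.log(phi / (2.0 * np.pi)) - 0.5 * phi * (residual @ residual)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(size=(16, 16, 16))
    t_xw = y + 0.1 * rng.normal(size=y.shape)   # imperfect model fit
    mask = np.ones_like(y, dtype=bool)
    print(gaussian_log_likelihood(y, t_xw, phi=100.0, mask=mask))
```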

In this case, the prior on the transformation model is modelled using a multivariate normal distribution:

P(w \mid \lambda) = \mathcal{N}(w; 0, (\lambda\Lambda)^{-1}) = \frac{|\lambda\Lambda|^{1/2}}{(2\pi)^{N_c/2}} \exp\left( -\tfrac{1}{2} w^T (\lambda\Lambda) w \right)    (4)

The prior knowledge of w is described in Eq. (4), where Λ encodes our regularisation as an N_c × N_c spatial kernel matrix, details of which are described in Regularisation model. N_c is the count of all the transformation parameters; in the case of FFDs, this is the number of control points in the image for all three deformation directions. λ is the scalar spatial precision parameter which controls the level of regularisation. λ is modelled as an unknown parameter, and can therefore be determined adaptively from the data, resulting in an automated approach to regularisation. With λ fixed to a constant value, this approach to regularisation is seen in other generative approaches to registration, e.g. in DARTEL (Ashburner, 2007) and FNIRT (Andersson et al., 2007). The novelty of the proposed approach lies in the inference of λ based on the data, and the inference of a posterior density function for w.

Spatial precision

As the spatial precision parameter λ is probabilistically modelled, a prior distribution of λ must be specified. The prior on λ is modelled using a Gamma (Ga) distribution:

P(\lambda) = \mathrm{Ga}(\lambda; s_0, c_0) = \frac{\lambda^{c_0 - 1} \exp(-\lambda / s_0)}{\Gamma(c_0)\, s_0^{c_0}}    (5)

Eq. (5) shows the definition of the prior over λ, with initial scale (s_0) and shape (c_0) parameters. A Gamma distribution allows us to provide a wide, uninformative prior over the possible values of λ; in this case we use values of s_0 = 10^{10} and c_0 = 10^{-10}.

Noise precision

In order to evaluate the optimal regularisation of our parameters, we need to accurately estimate the level of noise in our model fit. This is because of the inherent trade-off between regularisation and maximising the likelihood. We therefore also infer ϕ during the registration. We assume that the residual image has a global noise level that is zero mean and independent and identically distributed Gaussian for each voxel with variance ϕ^{-1}. We model the prior on ϕ as being Gamma distributed:

P(\phi) = \mathrm{Ga}(\phi; a_0, b_0) = \frac{\phi^{b_0 - 1} \exp(-\phi / a_0)}{\Gamma(b_0)\, a_0^{b_0}}    (6)

where a_0 and b_0 are the initial scale and shape prior hyper-parameter estimates of the distribution. These are again chosen to give a wide, non-informative prior distribution, with a_0 = 10^{10} and b_0 = 10^{-10}.

Transformation model

A key feature of any non-rigid registration algorithm is the choice of the transformation model. Many transformation methods have been proposed for use in the field of medical image registration (Holden, 2008). The proposed regularisation solution penalises the complexity of the deformation parameters, rather than just enforcing smoothness. Therefore an elastic registration model, which penalises deviation from the identity transformation, rather than a viscous fluid registration model (Christensen et al., 1996), is required.

Parametric transformation models describe the complete image transformation using a linear combination of basis functions. The selected basis set can have either global (Ashburner and Friston, 1999) or local (Rueckert et al., 1999) support within the image. These simple methods allow the transformation to be fully described by a single displacement field with three directional components; however, this constrains the transformation to only be capable of modelling small deformations, hence these are referred to as small-deformation models. These can be improved by using multi-resolution frameworks to limit image folding (Schnabel et al., 2001). Although these transformation models have difficulty modelling large deformations, they have substantially fewer parameters to optimise than a time-varying (Beg et al., 2005) or stationary (Ashburner, 2007; Vercauteren et al., 2008) velocity-field approach. Therefore, a FFD transformation model using B-splines to interpolate between control points (Rueckert et al., 1999) was deemed an appropriate choice for demonstrating the inference of regularisation in non-rigid registration.

Noise model

As described at the beginning of Methods, the noise in model fit is approximated to be zero-mean, independently and identically distributed Gaussian noise. We illustrate the reasonableness of this approximation in Fig. 2 using histograms of the residual image (y − t(x, w)) from fitted registration models, overlaid with the inferred Gaussian noise distribution. In inter-subject brain registration, misalignment of anatomy, rather than image formation noise, will be a large cause of error in model fit. This may add heavier tails to the residual image distribution, although the centre of the distribution is still well approximated as a Gaussian.

Unfortunately, the assumption of spatial independence in the noise model is largely incorrect. There are two primary sources of spatial noise covariance to consider. Firstly, image data is often pre-smoothed using a Gaussian filter to increase the SNR of the data and preserve features of a specific scale. This is performed with different full width at half maximum (FWHM) values at different multi-resolution levels. Image smoothing introduces additional covariance in the image data, and therefore also in the noise.

Fig. 2. Two example histograms of the unsmoothed residual image y − t(x, w) after fitting the registration model parameters for two sets of high-resolution (1 mm³), high-SNR T1 MR images of different human brains taken from the NIREP database (Christensen et al., 2006). The dashed overlaid red line shows the estimated i.i.d. Gaussian noise precision. In both of these cases the majority of the residual distribution is well modelled, although the residual distribution has heavier tails.
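As a concrete illustration of the priors in Eqs. (4)–(6), the sketch below evaluates the (unnormalised) log-density of the multivariate normal prior on w and sets up wide Gamma priors on λ and ϕ using scipy's shape/scale parameterisation. The hyper-parameter values mirror the wide, uninformative settings described above, but they and all variable names here are assumptions of this sketch, not the released implementation.

```python
import numpy as np
from scipy import sparse
from scipy.stats import gamma

def log_prior_w(w, lam, Lambda):
    """Unnormalised log of Eq. (4): N(w; 0, (lam * Lambda)^-1).

    w      : (N_c,) vector of transformation parameters
    lam    : scalar spatial precision lambda
    Lambda : (N_c, N_c) sparse spatial precision kernel (e.g. bending energy)
    """
    return -0.5 * lam * float(w @ (Lambda @ w))

# Wide Gamma priors on the spatial precision lambda and the noise precision phi,
# Eqs. (5)-(6); scipy's gamma uses a shape parameter 'a' and a 'scale'.
s0, c0 = 1e10, 1e-10            # assumed scale/shape for P(lambda)
a0, b0 = 1e10, 1e-10            # assumed scale/shape for P(phi)
prior_lambda = gamma(a=c0, scale=s0)
prior_phi = gamma(a=b0, scale=a0)

if __name__ == "__main__":
    n_c = 50
    # Stand-in precision kernel: a simple second-difference (curvature) penalty
    D2 = sparse.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n_c, n_c))
    Lambda = (D2.T @ D2).tocsr()
    w = np.random.default_rng(1).normal(size=n_c)
    print("log P(w | lambda):", log_prior_w(w, lam=0.5, Lambda=Lambda))
    print("prior means:", prior_lambda.mean(), prior_phi.mean())  # s0*c0 = a0*b0 = 1
```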

More importantly, the misalignment of tissue during the registration process introduces spatial correlation in the residual image. This is due to regions of tissue types often being spatially contiguous. An example difference image after affine registration is given in Fig. 3, which illustrates the spatial correlation of the model fit due to misalignment. This spatial covariance in e needs to be compensated for, to avoid over-emphasising the noise precision and therefore the relative importance of the likelihood to the spatial prior.

The most direct method to compensate for the spatial covariance of the residual noise (e) is to model spatially smooth noise using a Gaussian process. This would allow an adaptive determination of the noise covariance. Unfortunately, this approach is computationally very demanding. An alternative solution has been proposed with minimal computational overhead. By calculating the number of RESELS (RESolution ELementS), the number of independent signals in the data can be approximated. If the residual noise is assumed to have been smoothed using a Gaussian kernel, the degrees of freedom of the unsmoothed image can be approximated (Worsley et al., 1995). All terms that sum over voxels are weighted by the ratio of the degrees of freedom in the image to the number of voxels in the overlap, N_v. This is equivalent to decimating the data, a process which reduces the number of data samples to remove redundancy. However, as decimating the data requires removing voxels which may still contain valuable information, a weighting term is used instead, providing a virtual decimation (VD) (Groves et al., 2011). The VD weighting factor, α, can be calculated using Eq. (7) as given in Worsley et al. (1995):

\alpha = \frac{\mathrm{RESELS} \cdot (4\log(2)/\pi)^{D/2}}{N_v} = \frac{0.9394}{\mathrm{FWHM}_x} \cdot \frac{0.9394}{\mathrm{FWHM}_y} \cdot \frac{0.9394}{\mathrm{FWHM}_z}    (7)

where FWHM_{x, y, z} is the full width at half maximum (FWHM) of the equivalent Gaussian smoothing kernel in each direction. The smoothing kernel's FWHM is estimated by assessing the correlation between adjacent voxels in each direction:

(\mathrm{FWHM}_l)^2 = -2\log(2) / \log(\mathrm{corr}_l)    (8)

where corr_l is the correlation between adjacent voxels in direction l, l ∈ {x, y, z}. This adjustment weights all terms that sum over voxels such that we only consider the number of "independent" noise observations. Although this approach is non-Bayesian, it is still determined from the data, and therefore fits the adaptive framework that we wish to consider.

Regularisation model

When encoding regularisation as a prior on the deformation parameters w, it can be encoded using a spatial kernel which is specified in the form of a precision (inverse covariance) matrix. The precision matrix representation is preferred to a covariance matrix as it can describe the regularisation models of interest using a sparse form, unlike the covariance matrix, which in high-resolution registration would be computationally intractable to store. Λ can encode a variety of priors, depending on the choice of regularisation model. In this work we are concerned with using the thin-plate spline bending energy. For each transformation vector direction G, which provides a directional component of the mapping between x and t(x, w), the bending energy is given as:

\int_0^X \int_0^Y \int_0^Z \left[ \left( \frac{\partial^2 G}{\partial x^2} \right)^2 + \left( \frac{\partial^2 G}{\partial y^2} \right)^2 + \left( \frac{\partial^2 G}{\partial z^2} \right)^2 + 2\left( \frac{\partial^2 G}{\partial x \partial y} \right)^2 + 2\left( \frac{\partial^2 G}{\partial x \partial z} \right)^2 + 2\left( \frac{\partial^2 G}{\partial y \partial z} \right)^2 \right] dx\, dy\, dz    (9)

where x, y, z refer to locations within the co-ordinate system box, bounded by the origin and X, Y, Z. This spatial kernel is derived by differentiating G with respect to the set of transformation parameters which move in a single axis. A pictorial representation of this kernel, as well as membrane and elastic energy, is given in Ashburner (2007). As the transformation is wholly defined by the transformation parameters w, the regularisation of these parameters provides regularisation of the transformation. These regularisation models provide a prior distribution on the transformation parameters which penalise either the second derivative of the deviation from the identity transformation, or the bending energy of the transformation parameters. This regularisation model is widely utilised in parameterised registration approaches, such as Andersson et al. (2007), Ashburner (2007) and Rueckert et al. (1999).

Optimisation

We have described our proposed probabilistic generative model for non-rigid registration between two MR images of the human brain. This generative model, described in Fig. 1, contains a set of unknown parameters which are modelled using probability distributions. These parameters describe the generation of the target image data y using the source image data x. The distributions on the unknown parameters provide a probabilistic description of the image deformation w and describe the error in model fit (ϕ). As prior distributions are specified on the model parameters, the inference of the model parameters must account for this.

Fig. 3. An initial residual image, y − t(x, w), from two high-resolution (1 mm³), high-SNR T1 MR images of two different human brains taken from the NIREP database (Christensen et al., 2006). The images are affinely aligned using FLIRT (Jenkinson and Smith, 2001). As can be seen, there are large clusters of residuals in the model fit, particularly around the edges of the brain and the ventricles. The residuals are clearly highly spatially correlated due to image misalignment. As the alignment improves, the noise will become less correlated.
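The virtual decimation factor of Eqs. (7)–(8) can be computed directly from the residual image. The sketch below is an illustration under the stated assumptions, not the FNIRT/FSL code: it estimates the residual FWHM per axis from lag-one correlations using the relation reconstructed in Eq. (8), and forms α; function names and the cap at 1 are ours.

```python
import numpy as np

def lag_one_corr(res, axis):
    """Correlation between adjacent voxels of the residual image along one axis."""
    a = np.moveaxis(res, axis, 0)
    return np.corrcoef(a[:-1].ravel(), a[1:].ravel())[0, 1]

def virtual_decimation_alpha(res):
    """Approximate alpha = RESELS * (4 log 2 / pi)^(D/2) / N_v, Eq. (7).

    Per-axis FWHM of the equivalent Gaussian smoothing kernel is estimated from
    the lag-one correlation (Eq. (8) as reconstructed here):
        FWHM_l^2 = -2 log(2) / log(corr_l)
    """
    alpha = 1.0
    for axis in range(res.ndim):
        corr = np.clip(lag_one_corr(res, axis), 1e-6, 1.0 - 1e-6)
        fwhm = np.sqrt(-2.0 * np.log(2.0) / np.log(corr))
        alpha *= np.sqrt(4.0 * np.log(2.0) / np.pi) / fwhm   # 0.9394 / FWHM_l
    return min(alpha, 1.0)   # cannot have more independent samples than voxels

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    noise = rng.normal(size=(40, 40, 40))
    # Crude local averaging to induce spatial correlation in a toy residual
    smoothed = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)
                + np.roll(noise, 1, 2)) / 4.0
    print("alpha (white noise):   ", virtual_decimation_alpha(noise))
    print("alpha (smoothed noise):", virtual_decimation_alpha(smoothed))
```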

Bayesian statistical inference provides the only coherent framework for the adjustment of belief (in the form of probability density functions, PDFs) in the presence of new information (Cox, 1946). Therefore, the Bayesian framework is the most appropriate method to infer on this model.

Bayesian inference

Bayesian inference provides a principled method to infer on the registration model parameters, given their prior distributions. Prior knowledge is incorporated into inferring the posterior distribution using Bayes' rule, which states:

P(\Theta \mid D) = \frac{P(D, \Theta)}{P(D)} = \frac{P(D \mid \Theta)\, P(\Theta)}{P(D)}    (10)

where Θ is the set of model parameters and D is the observable data. The posterior probability of the model parameters, P(Θ|D), is given in terms of the likelihood of the data given the model parameters, P(D|Θ), multiplied by the prior probability of the parameters, P(Θ), and normalised by the model evidence, P(D). The evidence is calculated by integrating the likelihood over the set of model parameters, which is often intractable to compute, and for most optimisation methods can be ignored. This gives rise to the following relationship:

P(\Theta \mid D) \propto P(D \mid \Theta)\, P(\Theta)    (11)

Full Bayesian model inference can be provided using tools such as Markov chain Monte Carlo (MCMC), but this would be particularly computationally demanding in this setting due to the large number of parameters. Therefore we resort to an approximate solution, using the mean-field variational Bayesian (VB) methodology for model inference.

Variational Bayes

This work follows the mean-field VB approach for inference of graphical models (Jaakkola, 2001; Jordan et al., 1999), which builds on previous work using the mean-field approximation (Peterson and Anderson, 1987). VB is related to the popular expectation maximisation (EM) approach (Dempster et al., 1977) and is sometimes referred to as variational EM in the literature. VB allows tractable Bayesian inference of an approximate posterior probability distribution of the model parameters by approximating the posterior parameter distribution P(Θ|D) as a simpler, parametric probability distribution function q(Θ):

q(\Theta) \approx P(\Theta \mid D)    (12)

The hyper-parameters of q(Θ) can then be optimised to fit the distribution. Mean-field VB uses the mean-field approximation over groups of approximate posterior distributions, by assuming independence between groups. In our case, the transformation, noise and regularisation parameter distributions are factorised in the approximate posterior:

P(w, \phi, \lambda \mid y) \approx q(w, \phi, \lambda) = q(w)\, q(\phi)\, q(\lambda)    (13)

This assumption of independence does not prohibit correlation in the mean of the posteriors, and allows for a tractable solution. As VB is not tractable for arbitrary non-linear forward models, a first-order Taylor series approximation is used to provide a linear approximation of the transformation function.

Objective function

The objective function in VB is the negative variational free energy (F), which provides a summary of the current model fit whilst penalising the deviation of the parameters from their prior distributions. F provides a lower bound on the log evidence for the model, which becomes closer to the true evidence as the model fit improves. In the case of our registration model, using a mean-field approximation on our inference, F can be written as the sum of four terms:

F = L_{av} - D_{KL}(q(w)\,\|\,P(w)) - D_{KL}(q(\lambda)\,\|\,P(\lambda)) - D_{KL}(q(\phi)\,\|\,P(\phi))    (14)

where L_av is the expectation of the log-likelihood given in Eq. (3), with respect to both the Gamma distribution on ϕ and the Gaussian noise model. D_KL(q‖p) is the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951) between the approximate posterior q and the prior p for each of the model parameters. The KL terms penalise over-confidence in the parameter estimates and deviation from the prior distribution. Maximisation of F is equivalent to minimising the Kullback-Leibler divergence between the approximate posterior distribution q({w, λ, ϕ}) and the true posterior P({w, λ, ϕ}|y), which cannot be directly calculated. Variational calculus is used to derive an analytic iterative update for each hyper-parameter of the approximate posterior distributions to maximise the objective function F. The definition of F for this registration model is provided in Appendix A. For computational efficiency, in practice we only calculate a small portion of F to measure convergence, which we call C.

Model inference

Using the VB framework, analytic updates can be derived for the approximate posterior distributions of the transformation, regularisation and noise parameters, which seek to maximise F. We use free-form variational Bayes, in which the form of the factorised posterior distribution is not fixed arbitrarily but is instead determined algebraically by combining the likelihood expression with the prior distribution. The derivation of the updates presented below is provided in Appendix B.

Inference on transformation parameters

The approximate posterior distribution of the transformation parameters, w, follows a multivariate normal distribution:

q(w) = \mathcal{N}(w; \mu, \Upsilon^{-1})    (15)

The VB update rules allow for the derivation of the updates for the mean and covariance of the transformation parameters, which fully define the approximate posterior distribution of w:

\Upsilon = \alpha \bar{\phi} J^T J + \bar{\lambda} \Lambda    (16)

\Upsilon \mu_{new} = \alpha \bar{\phi} J^T (J \mu_{old} + k)    (17)

where J is the N_v × N_c matrix of first-order partial derivatives of the transformed image t(x, μ_old) with respect to the transformation parameters, centred about the previous estimate of the mean, held in the N_c × 1 vector μ_old. k is an N_v × 1 vector representing the residual image y − t(x, w). μ_new is an N_c × 1 vector describing the current estimated transformation parameters, and is dependent on the old estimated values. The approximate posterior precision matrix of the set of transformation parameters is given by the N_c × N_c matrix ϒ. λ̄ is the expectation of the posterior spatial precision distribution and ϕ̄ is the expectation of the estimated noise precision. As both μ_new and J depend on the current parameters, μ_old, these updates need to be applied iteratively until convergence of μ is achieved.
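A minimal sketch of one iteration of the updates in Eqs. (16)–(17), assuming J, k and Λ are already available as sparse/dense arrays and that α, ϕ̄ and λ̄ have been computed elsewhere. As noted in the Implementation section below, ϒ^{-1} is never formed explicitly; here the linear system is solved with a conjugate-gradient method. Names are illustrative and this is not the FNIRT implementation.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def update_w_posterior(J, k, mu_old, Lambda, phi_bar, lam_bar, alpha):
    """One update of q(w) = N(w; mu, Upsilon^-1), Eqs. (16)-(17).

    J       : (N_v, N_c) sparse Jacobian of t(x, mu_old) w.r.t. w
    k       : (N_v,) residual y - t(x, mu_old)
    Lambda  : (N_c, N_c) sparse spatial precision kernel
    phi_bar, lam_bar : current expectations of the noise/spatial precisions
    alpha   : virtual decimation factor
    """
    Upsilon = (alpha * phi_bar) * (J.T @ J) + lam_bar * Lambda        # Eq. (16)
    rhs = (alpha * phi_bar) * (J.T @ (J @ mu_old + k))                # Eq. (17)
    mu_new, info = cg(Upsilon, rhs, x0=mu_old)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return mu_new, Upsilon

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_v, n_c = 500, 30
    J = sparse.random(n_v, n_c, density=0.1, random_state=3, format="csr")
    k = rng.normal(size=n_v)
    Lambda = sparse.eye(n_c, format="csr")                 # stand-in regulariser
    mu, Upsilon = update_w_posterior(J, k, np.zeros(n_c), Lambda,
                                     phi_bar=50.0, lam_bar=1.0, alpha=0.5)
    print(mu[:5])
```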

These updates are equivalent to those in non-linear least-squares (NLLS), and for fixed values of ϕ and λ this method is equivalent to standard NLLS approaches to registration such as FNIRT (Andersson et al., 2007).

Inference on regularisation parameters

The variational Bayesian methodology can be used to provide updates on the posterior distribution of the regularisation control parameter λ. The approximate posterior distribution of λ is Gamma distributed, q(λ) = Ga(λ; s, c). The hyper-parameter updates are as follows:

c = c_0 + \frac{N_c}{2}    (18)

\frac{1}{s} = \frac{1}{s_0} + \frac{1}{2}\left[ \mathrm{Tr}(\Upsilon^{-1}\Lambda) + \mu^T \Lambda \mu \right]    (19)

where Tr refers to the matrix trace operation. The expectation of the approximate posterior distribution over λ is given as λ̄ = s c.

Inference on noise parameters

The approximate posterior noise parameter distribution ϕ is Gamma distributed, q(ϕ) = Ga(ϕ; a, b), with hyper-parameter updates given by:

b = b_0 + \frac{\alpha N_v}{2}    (20)

\frac{1}{a} = \frac{1}{a_0} + \frac{\alpha}{2}\left[ k^T k + \mathrm{Tr}(\Upsilon^{-1} J^T J) \right]    (21)

As described in the Noise model section, N_v is scaled by the virtual decimation factor, α, such that it measures the number of independent noise voxels. This is required to prevent over-emphasising ϕ. The expectation of the approximate posterior distribution over ϕ is given as ϕ̄ = a b.

Implementation

Approximations

From a practical perspective, the updates required for both the noise (Eq. (21)) and the smoothness (Eq. (19)) precision contain terms which are computationally unfeasible to calculate. Specifically, the difficult term to compute is the inverse of the posterior precision matrix of the transformation parameters, ϒ^{-1}. As ϒ is a large sparse matrix, calculating the inverse is computationally very intensive and would require a very large amount of memory. Therefore, only for the updates in Eqs. (21) and (19), an approximation to the posterior covariance matrix is made, which assumes that the control point at each location is independent of its neighbours and only has cross-directional covariance. This allows a sparse inverse approximation which can be rapidly calculated and provides a sufficiently accurate estimation of ϕ and λ, with a slight tendency to over-weight λ and under-estimate ϕ. The accuracy of this approximation is described in Appendix C. For calculating the update on μ in Eq. (17), ϒ^{-1} does not need to be explicitly calculated, as approximate methods can be used on the full precision matrix ϒ to find μ_new; in this implementation a conjugate gradient method was used. Although our updates seek to maximise the negative variational free energy, we choose not to calculate F in full for measuring the convergence of each update. This is because of the large computational expense of calculating the log determinant of very large matrices. Instead we measure C = α ϕ̄ kᵀk + λ̄ μᵀΛμ for convergence after each update of μ.

Implementation

The 3D implementation of this registration model was incorporated into the FMRIB Non-linear Image Registration Tool (FNIRT) (Andersson et al., 2007). FNIRT is a non-linear least-squares (NLLS) implementation of a FFD B-spline model as popularised by Rueckert et al. (1999). FNIRT uses MAP inference with Gauss-Newton optimisation, using Levenberg-Marquardt to deal with regions of non-linearity. It is regularised using a fixed-parameter bending energy prior, with a simple noise model which depends on the SSD of the entire image residual, including background regions. FNIRT was chosen as a basis for implementation of the described model due to its similar formulation to our approach. Both are generative models, solved in a NLLS framework. FNIRT also has a highly efficient mechanism for calculating the Hessian matrix, which would otherwise be computationally expensive. However, it must be stressed that this work describes a generic framework and is not restricted to application in FNIRT. It could be implemented in any other generative model regularised using an elastic prior on the transformation parameters, including the diffeomorphic-by-design approaches of a stationary velocity field (Ashburner, 2007) or geodesic shooting (Ashburner and Friston, 2011), which may have some advantages in mapping larger deformations. An algorithmic summary of the proposed method is presented in Algorithm 1.

Algorithm 1. Pseudo-code description of the VB registration algorithm.

    a_0 = 1e10, b_0 = 1e-10, s_0 = 1e10, c_0 = 1e-10   // Initialise prior hyper-parameters
    Initialise a, b, s, c, μ                            // Set initial values for noise, regularisation and transformation parameters
    for i = 1 to no. of resolution levels do
        Smooth and sub-sample images according to the multi-resolution scheme
        α = VD(y − t(x, w))                             // Calculate α from the smoothness of the residual image using Eq. (7)
        while CalculateC() < C_old do
            C_old = CalculateC()                        // Calculate the measurable cost function given the current parameters
            J = first-order derivatives of t(x, μ)      // Re-linearise
            Calculate ϒ                                 // Calculate posterior precision matrix according to Eq. (16)
            Update μ_new                                // Update μ according to Eq. (17)
            if CalculateC() < C_old then
                μ = μ_new
                Update a, b, s, c                       // Update hyper-parameters according to Eqs. (18) to (21)
            end if
        end while
    end for
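To complement Algorithm 1, the following sketch implements the hyper-parameter updates of Eqs. (18)–(21) and the convergence measure C. For brevity it uses a diagonal stand-in for ϒ^{-1} in the trace terms, which is cruder than the block-wise (cross-directional) approximation described in Approximations; all helper names and inputs are assumptions of this sketch.

```python
import numpy as np

def update_lambda(mu, Lambda, Upsilon_inv_diag, s0=1e10, c0=1e-10):
    """q(lambda) = Ga(lambda; s, c), Eqs. (18)-(19), with Tr(Upsilon^-1 Lambda)
    approximated from an (assumed) diagonal approximation to Upsilon^-1."""
    c = c0 + 0.5 * mu.size
    trace_term = float(np.sum(Upsilon_inv_diag * Lambda.diagonal()))
    s = 1.0 / (1.0 / s0 + 0.5 * (trace_term + float(mu @ (Lambda @ mu))))
    return s, c, s * c                      # scale, shape, posterior mean of lambda

def update_phi(k, J, Upsilon_inv_diag, alpha, a0=1e10, b0=1e-10):
    """q(phi) = Ga(phi; a, b), Eqs. (20)-(21), with the same trace approximation."""
    b = b0 + 0.5 * alpha * k.size
    JtJ_diag = np.asarray(J.multiply(J).sum(axis=0)).ravel()   # diagonal of J^T J
    trace_term = float(np.sum(Upsilon_inv_diag * JtJ_diag))
    a = 1.0 / (1.0 / a0 + 0.5 * alpha * (float(k @ k) + trace_term))
    return a, b, a * b                      # scale, shape, posterior mean of phi

def convergence_measure(k, mu, Lambda, phi_bar, lam_bar, alpha):
    """C = alpha * phi_bar * k^T k + lam_bar * mu^T Lambda mu."""
    return alpha * phi_bar * float(k @ k) + lam_bar * float(mu @ (Lambda @ mu))
```

In the full model, ϒ^{-1} is approximated block-wise per control point (see Approximations); the diagonal used here only keeps the sketch short.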

Results

Experiments

To evaluate our proposed method we first examine the variability in inferred values of λ across a range of signal-to-noise ratios (SNR), resolution levels, and between individuals. We compare our proposed method to the original FNIRT implementation using structural overlap measurements, transformation complexity, and the level of image folding of the transformation. We also compare the individual inference of λ in each registration, as opposed to fixing it based on the average of a set of registrations. Finally, an illustration of the probabilistic confidence output of the proposed framework is provided.

Materials

All of the experiments conducted used data available in the Non-rigid Image Registration Evaluation Project (NIREP) (Christensen et al., 2006). NIREP contains a set of 16 3D T1-weighted MR images of the human brain, taken from 16 healthy subjects: 8 male (mean age 32.5 years, std. dev. 8.4) and 8 female (mean age 29.8 years, std. dev. 5.8). These images have a high SNR and have 32 accurately delineated cortical labels. The images have been pre-processed by correcting for bias fields and extracting the brains. Prior to our experiments we affinely align each image to the MNI152 atlas using FLIRT (Jenkinson and Smith, 2001). Each of our individual non-rigid registrations is then initialised with a rigid alignment between the atlas-aligned images.

Multi-resolution registration scheme

Multi-resolution registration schemes provide an efficient approach to deal with the different scales of deformations required to warp between two subjects. These schemes provide a coarse-to-fine alignment, with image sub-sampling and wider control point spacing at the coarser resolution levels. In this implementation, the transformation parameters are refined at each multi-resolution level, such that a single set of transformation parameters is estimated for the entire deformation. The images are smoothed prior to sub-sampling, both to improve the SNR of the image data and to provide a scale-space representation of the image data which only preserves image features of a specific scale (Witkin, 1987). As different scales of image features are visible at different multi-resolution levels, and each level will have varying residual magnitudes and degrees of freedom which greatly affect the model fit, the levels will commonly require different levels of regularisation to provide an optimal balance against the noise precision.

In our comparison with FNIRT we utilise their standard multi-resolution scheme, which is recommended for general use in inter-subject structural brain registration. As FNIRT only provides a relatively coarse registration with a control point spacing of 10 mm by default, we extrapolate these default regularisation parameters to provide a more accurate 5 mm control point spacing. We choose both a highly regularised and a lower regularised 5 mm configuration for comparison. The multi-resolution scheme has 4 levels of sub-sampling resolutions and control point spacings. At each of these levels, there are two separate sub-levels with differing amounts of regularisation and image pre-smoothing. As discussed in Variability in regularisation across signal-to-noise ratios, we also propose a different pre-smoothing scheme. The details of the extrapolated registration multi-resolution scheme are given in Table 1.

Table 1. Multi-resolution scheme used by FNIRT, extended to have a final 5 mm control point spacing, with either a lower or higher regularisation at the finest two levels. Columns: multi-resolution level (1a, 1b, 2a, 2b, 3a, 3b, 4a, 4b), image resolution (mm), control point spacing (mm), FNIRT pre-smoothing FWHM (mm), moderate pre-smoothing FWHM (mm), and fixed FNIRT λ (with a higher/lower option at levels 4a and 4b).

Variability in inferred regularisation

Variability in regularisation across signal-to-noise ratios

Theoretically, more regularisation should be used in situations where the image information is of lower quality, in order to avoid the registration becoming under-constrained. Therefore, we choose to measure the variability in the inferred level of regularisation over a range of SNRs. This is achieved by adding Gaussian noise to the source image, which had an original SNR of 45, to produce a set of images with a range of SNR between 10 (20 dB) and 45 (33 dB). The SNR of the target was estimated at 50 (34 dB). Each noisy image is registered to the unmodified target image with the proposed framework. As the anatomy of both images remains the same, the difference in inferred regularisation between the registrations will only depend upon the SNR of the source image. We chose to add noise to the source image as this mimics the common spatial normalisation procedure, where collected data is registered to some common template. This template is often an average of many subjects, and hence has little noise. Adding noise to the target image instead may lead to slightly different behaviour, as gradients are calculated based on the source image, and noise in the source image is smoothed by the interpolation used to warp the source image. 20 noisy images were sampled for each SNR level to provide error bounds on our estimates.

The details of the pre-smoothing scheme are likely to affect the resulting registration. Image smoothing increases the SNR of the data, but preserves fewer small features. Therefore, we may expect that pre-smoothing using a larger kernel will result in smoother deformation fields, even if a constant level of regularisation were used. Furthermore, there will exist a dependence of the necessary regularisation on the level of pre-smoothing. Finding an optimal pre-smoothing is not the objective of this paper, and it would be likely to vary between applications. We choose to test three schemes: the original FNIRT scheme, a scheme employing no pre-smoothing, and a moderately sized blurring scheme. The moderate pre-smoothing scheme was proposed as an intermediate between the higher pre-smoothing used by FNIRT and none. Details of the pre-smoothing schemes are given in Table 1.

The results of this experiment are shown in Fig. 4. Where no pre-smoothing is used, we see an exponential trend to infer higher spatial precision at lower SNR, heavily constraining the registration in these cases. Using the FNIRT pre-smoothing scheme shows a more linear variation in λ with SNR, and we also see that λ remains substantially higher at high SNR compared to the other two schemes.

Fig. 4. Plot showing the inferred spatial precision (λ) at the finest multi-resolution level when registering a single pair of images taken from the NIREP database. Independent additive Gaussian noise was added to the source image at a range of signal-to-noise ratios between 10 (20 dB) and 45 (33 dB). The error bars describe the standard deviation of the inferred values across twenty separate registrations with different instances of random noise added to the source images. The images were registered under three different image pre-smoothing schemes: the standard FNIRT scheme, a moderate pre-smoothing scheme (both described in Table 1), and a scheme with no image pre-smoothing. At low SNR and without pre-smoothing of the data, we see very high regularisation, which decreases exponentially with improved SNR and overly constrains the registration where noise is present. Using the FNIRT pre-smoothing scheme yields a more linear trend in regularisation with SNR, but produces overly smooth deformations, even at high SNR. The moderate pre-smoothing scheme provides a balance between the two, with lower regularisation at high SNR and a linear relationship between SNR and λ.

This is because the substantial smoothing of the images removes some of the image features required to drive a high-resolution registration. The moderate pre-smoothing scheme provides a trade-off between the two schemes, with a linear decrease in λ with SNR and a reasonably low λ at high SNR, permitting accurate registration. We therefore choose to use the moderate pre-smoothing scheme with the proposed method in our experiments.

Variability in regularisation across subjects and multi-resolution levels

Each level of the multi-resolution scheme will require a different level of regularisation depending on the control point spacing, the image pre-smoothing and the image sub-sampling, as these factors affect either the transformation model or the data to be registered. Additionally, each pair of individual brain images will require a different regularisation level to provide the mapping between them, which will depend on both the SNR of the source and target images and the anatomical differences. In this experiment we register all 16 subjects in the NIREP database to one another, resulting in a total of 240 registrations. Each of the images in NIREP was acquired using the same protocol, hence their SNR should be similar. Therefore we can assume that the majority of the variability in inferred regularisation is due to anatomical differences. The resulting inferred λ values, in terms of their FNIRT-equivalent values, are shown in Fig. 5. The FNIRT-equivalent values are calculated by finding the FNIRT λ value which would be given for the ratio of the inferred spatial precision over the noise precision.

Fig. 5 illustrates the variability in λ across subjects at each multi-resolution level and allows comparison with the original FNIRT configuration. As can be seen, λ has a wide variability between subjects at each stage of the registration. This variability in inferred regularisation across pairwise examples is intuitive, as the complexity of any deformation necessary to map between two individual brains is likely to vary widely depending on the subjects presented. This is further demonstrated in the next section, which shows a wide variability in transformation complexity across registrations. There appears to be no single λ which is inferred and would provide an optimal level of regularisation. The level of λ will depend not only on SNR, but also on the structural differences between individual subjects, and therefore should be individually selected.

Comparison against FNIRT

It is appropriate to compare the performance of the proposed framework against the closely related method FNIRT. Each of the 16 NIREP subjects is again registered to every other subject, giving a total of 240 registrations. The proposed method was compared against FNIRT with a high and a low level of regularisation at the highest resolution level. We compare these methods in terms of transformation complexity, image folding and overlap of segmented regions. Fig. 6(a) shows the level of complexity across all 240 registrations for each method. As can be seen, for fixed levels of regularisation the bending energy forms a tight distribution.

Fig. 5. Histogram and fitted gamma distribution plot of the inferred FNIRT-equivalent values of λ over the 240 pairwise registrations of the NIREP database, for levels 1b (coarsest), 2b, 3b and 4b (finest). The dashed red line indicates the λ value utilised in the original FNIRT configuration; the dash-dot blue line for level 4b indicates the low regularisation scheme for the finest levels. Note the similarity between the mode of the distribution in levels 2b, 3b and 4b and the hand-defined FNIRT values, although the variance of the inferred FNIRT λ distributions is reasonably wide. The main difference between the inferred and hand-defined values lies at the coarsest resolution level, where a much higher level of regularisation is inferred.
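The section above reports inferred λ values converted to FNIRT-equivalent units and summarised with fitted gamma distributions (Fig. 5). Below is a small sketch of that kind of post-processing, under the assumption that the FNIRT-equivalent value is simply the ratio of the expected spatial precision to the expected noise precision; the exact scaling used in the paper is not reproduced, and the input values are random stand-ins, not results.

```python
import numpy as np
from scipy.stats import gamma

def fnirt_equivalent_lambda(lam_bar, phi_bar):
    """Assumed conversion: ratio of inferred spatial precision to noise precision."""
    return lam_bar / phi_bar

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Stand-ins for the expectations inferred in 240 pairwise registrations
    lam_bars = rng.gamma(shape=4.0, scale=25.0, size=240)
    phi_bars = rng.gamma(shape=50.0, scale=2.0, size=240)
    equivalent = fnirt_equivalent_lambda(lam_bars, phi_bars)

    # Fit a gamma density to the per-registration values, as in Fig. 5
    shape, loc, scale = gamma.fit(equivalent, floc=0.0)
    mode = (shape - 1.0) * scale if shape > 1.0 else 0.0
    print(f"fitted gamma: shape={shape:.2f}, scale={scale:.3f}, mode~{mode:.3f}")
```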

When the level of regularisation is inferred, a much larger spread of transformation complexity is obtained, with a similar mean complexity compared to the higher regularised FNIRT scheme. Where FNIRT uses little regularisation at the finest level, a substantially higher transformation complexity is obtained than with the other settings. The percentage of folded voxels in the transformation can be considered a surrogate measure of model over-fitting, indicating how under-constrained (under-regularised) the registration is, as shown in Fig. 6(b). Here, FNIRT with low regularisation leads to a large amount of image folding. FNIRT with higher fixed regularisation still has a mode of 40% more folded voxels than the proposed framework. Therefore, we may conclude that fixed regularisation methods over-fit the data in some cases, by providing a more complicated transformation without sufficient information to justify this complexity.

Fig. 7 shows the average overlap of the 32 segmented cortical regions over the 240 registrations for each method, measured using the Dice metric (Dice, 1945), which is defined as:

D = \frac{2\,|A \cap B|}{|A| + |B|}    (22)

where A represents the propagated segmentation labels, and B the target label segmentation. As can be seen, after the affine registration using FLIRT, the average overlap of these regions is very low. FNIRT using a low fixed level of regularisation provides the highest level of overlap, at the expense of much increased complexity and image folding, as described previously. With a higher level of regularisation, FNIRT suffers from more negative Jacobians and achieves a slightly lower average overlap than our inferred method. Our method shows a large improvement in overlap from the initial alignment using FLIRT and provides a more flexible approach to registration, which results in a better constrained transformation.

Fig. 7. A boxplot showing the average overlap scores (Dice) over the 32 segmented cortical regions between the 4 different non-rigid methods and FLIRT, from the 240 pairwise registrations of the NIREP database. As can be seen, all the FFD methods outperform the affine registration of FLIRT. FNIRT with low λ at the finest levels achieved the highest overlap, at the cost of a lot of transformation complexity and image folding (Fig. 6(a) and (b)). The proposed framework achieves a very similar result in terms of overlap to that of FNIRT with high regularisation and the VB framework with fixed regularisation, but with a reduction in the level of image folding.

Comparison with average of inferred regularisation values

In order to illustrate whether regularisation needs to be inferred on an individual basis, or to establish whether this framework should instead be used to infer a good set of fixed parameter values, we re-ran all 240 registrations using our framework, but without inferring the value of λ, instead using the average inferred λ across the 240 registrations. This is the right-most plot in Figs. 6(a), (b) and 7. As could be expected, some of the variability in the transformation complexity is lost as a result of each registration using the same amount of regularisation. From Fig. 7 we can see that the overlap is very slightly improved with this method over individual adaptivity, but this comes at the expense of a 5% increase in the mode of the number of negative Jacobians. In Fig. 8, we show an example registration where λ is inferred, or is fixed at the mean of the inferred λ values. In this example we show the propagation of a cortical label (right post-central gyrus), which is better aligned where a higher level of regularisation is inferred for all of the multi-resolution levels except the finest. In this case, inferring λ results in a 5% decrease in folding and leads to a small increase in average overlap. Based on these results, we can summarise that even for a similar group of healthy individuals, without individually inferred regularisation the registration will be improperly constrained in more cases, resulting in less smooth mappings which give little to no improvement in overlap. An alternative approach would have been to use an informative prior distribution for λ, where s_0 and c_0 were derived from the individual runs.

Fig. 6. Violin plots illustrating the probability density functions of the bending energy, and the percentage of voxels with negative Jacobians, across the different non-rigid methods. Violin plots are interpreted as a superposition of a kernel density estimation plot (in blue) on top of a boxplot (black box and line). The data in these plots come from the 240 pairwise registrations of the 16 individuals in the NIREP database. The plot on the left (a) shows the level of bending energy, which measures transformation complexity, across the different non-rigid methods. The fixed regularisation FNIRT approaches have a very tight distribution, implying a similar level of complexity used to map between all pairs of individuals. Conversely, where λ is inferred from the data, the level of bending energy takes a much wider range of values, as the transformation complexity is data dependent. With the VB approach with λ fixed to the mean of the inferred distribution, we still see a fairly wide distribution due to the inferred noise level. Plot (b) shows the percentage of voxels with negative Jacobians (image folding) across the different non-rigid methods. The level of folding is substantially higher in FNIRT where λ is low at the finest resolution levels, compared to all other methods. Where λ is inferred from the data we see a lower mode and median than where a fixed level of regularisation is used.
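The evaluation above relies on two simple quantities: the Dice overlap of Eq. (22) and the percentage of voxels with negative Jacobian determinants (folding). A minimal sketch of both follows, assuming a dense displacement field sampled on the voxel grid; the helper names are ours.

```python
import numpy as np

def dice(labels_a, labels_b, label):
    """Dice overlap of Eq. (22) for one label: 2|A n B| / (|A| + |B|)."""
    a = labels_a == label
    b = labels_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

def folding_percentage(disp):
    """Percentage of voxels whose local Jacobian determinant is negative.

    disp : (X, Y, Z, 3) displacement field in voxel units; the mapping is
           identity + displacement, so J = I + grad(disp).
    """
    grads = np.stack([np.stack(np.gradient(disp[..., d]), axis=-1)
                      for d in range(3)], axis=-2)   # (..., 3, 3): d disp_d / d x_j
    det = np.linalg.det(np.eye(3) + grads)
    return 100.0 * np.mean(det < 0)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    seg_a = rng.integers(0, 4, size=(24, 24, 24))
    seg_b = rng.integers(0, 4, size=(24, 24, 24))
    print("Dice for label 1:", dice(seg_a, seg_b, 1))
    disp = 0.3 * rng.normal(size=(24, 24, 24, 3))    # noisy toy field
    print("folded voxels (%):", folding_percentage(disp))
```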

Fig. 8. An example registration result, highlighting some issues with using an inappropriate level of regularisation. The registration is from one NIREP subject to subject 1 of the NIREP database, where the inferred λ is between 15 and 25% higher than the average λ for all multi-resolution levels except the finest, which is similar. The illustrations show 3 registration results, with the label overlap status for the right post-central gyrus overlaid in colour. Using affine alignment, the structure is poorly aligned. Where λ is inferred, an improved alignment is seen. The average inferred λ, which uses less regularisation at the coarser multi-resolution levels, produces a worse label overlap than inferring λ from the data. In this case, across the image, inferring a higher λ results in a 5% reduction in voxel folding as well as a small increase in average label overlap.

Discussion and Conclusions

Discussion

We have proposed a framework for inferring the level of spatial regularisation in non-rigid registration as part of a probabilistic inference scheme. The inference of spatial regularisation control parameters in this manner has previously been demonstrated in general linear models for fMRI analysis (Woolrich et al., 2004) and mixture models for segmentation (Woolrich and Behrens, 2006). Some related work has occurred in the registration field, in particular the work of Allassonnière et al. (2007), where a probabilistic model is used to estimate the prior covariance matrix of the set of deformations required to warp an unknown template to a set of examples. More closely related to the proposed method is the work of Risholm et al. (2010b), who present an approach to estimate the elastic material parameters in intra-subject brain registration. Their approach allows for the estimation of the two Lamé parameters governing the elastic energy prior, which may vary over space, or across spatial compartments. Their model is inferred upon using MCMC, which does not require assumptions to be made about the distribution of the posterior. An advantage of both of these alternative methods over the presented work is the spatial non-stationarity of the transformation parameter covariance, whereas our method infers only a single regularisation parameter λ. However, our proposed approach has a major practical advantage in providing computationally tractable inference for high-resolution 3D registration using mean-field VB inference.

Using an optimal level of regularisation is important for inferring an accurate and smooth mapping. Over-regularisation will always lead to an inaccurate mapping, as the registration is overly constrained, whereas under-regularisation allows the inference of unjustifiably complex mappings, i.e. adding complexity for minimal benefit in the model fit. Under-regularisation, particularly at coarser multi-resolution levels, can lead to convergence at poor local minima and, if not properly configured for particularly noisy data, can also lead to the registration of image noise. When using a small-deformation model, such as a FFD model, image folding is a further symptom of under-regularisation. We have demonstrated that our proposed approach provides an appropriate framework for non-rigid registration which adapts to the SNR of the data and the anatomical variability between subjects.

A benefit of such an adaptive approach to regularisation in non-rigid registration is that the regularisation parameters do not need to be derived using a time-consuming trial and improvement approach, which would likely need to be re-tuned for different datasets. Regularisation parameter values could alternatively be estimated by assessing registration performance on a training set of labelled data according to some quantitative metric. However, this relies on having sufficient labelled training data with similar properties to any data which will be registered. Allowing the inference of regularisation on an individual basis overcomes the "one size fits all" approach to registration complexity, which is likely to be flawed due to cross-population anatomical variability, implying that a fixed regularisation solution will be under- or over-constrained in some situations. In this work we have inferred a wide range of values for λ across different registrations at each multi-resolution level. These values produce equivalent average structural overlap measurements to hand-tuned regularisation methods, but in general result in a better constrained mapping, as indicated by the lower level of image folding. The differences in bending energy across the set of 240 registrations when inferring λ show that varying complexity is required for registering different individuals from a population, even where all the subjects are healthy controls from a reasonably small age range.

There are two key advantages in our model formulation using a principled probabilistic framework. Firstly, as our framework is fully probabilistic, we have an estimate of the uncertainty of the registration model parameters. Of these parameters, most interestingly we have obtained an estimate of the uncertainty of the transformation parameters in the form of a covariance matrix. This provides both a measure of the spatial uncertainty of each voxel in the transformed source image, and of any registration-derived features, for example Jacobian features. An example map illustrating the estimated spatial variance in each direction is given in Fig. 9. This map shows the variance of each transformation parameter, interpolated across the image domain. The variance was estimated by inverting a 5^3 set of transformation parameters, in all 3 directions, for each control point. Previous attempts to produce probabilistic output from registration have utilised stochastic variation of control points (Hub et al., 2009), whereas our method explicitly models the variance of each transformation parameter.
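A sketch of how per-control-point uncertainty maps like those in Fig. 9 could be produced from the approximate posterior precision ϒ, using a local block inversion in the spirit of the cross-directional approximation described earlier. The block layout assumed here (parameters ordered control-point-major, x/y/z within each point) and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def control_point_stddevs(Upsilon, n_points):
    """Per-control-point standard deviations from the posterior precision.

    Upsilon  : (3*n_points, 3*n_points) dense or sparse posterior precision,
               assumed ordered as [p0_x, p0_y, p0_z, p1_x, ...].
    Returns  : (n_points, 3) standard deviations for the x, y, z displacements.
    """
    stds = np.empty((n_points, 3))
    for p in range(n_points):
        sl = slice(3 * p, 3 * p + 3)
        block = Upsilon[sl, sl]
        block = np.asarray(block.todense()) if hasattr(block, "todense") else np.asarray(block)
        cov = np.linalg.inv(block)          # local 3x3 cross-directional covariance
        stds[p] = np.sqrt(np.diag(cov))
    return stds

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n_points = 4
    A = rng.normal(size=(3 * n_points, 3 * n_points))
    Upsilon = A @ A.T + 10.0 * np.eye(3 * n_points)   # SPD stand-in precision
    print(control_point_stddevs(Upsilon, n_points))
```

These per-control-point standard deviations could then be interpolated across the image domain (e.g. with the same B-spline basis) to give voxel-wise uncertainty maps.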

Fig. 9. An example map of the spatial uncertainty of our transformation. The top row shows the transformed source image; the next three rows contain maps showing the standard deviation, for each transformation direction, of the final mapping between images. Notice the lower uncertainty surrounding edges, and the higher uncertainty in homogeneous regions.

Risholm et al. (2010a) discuss some possible uses of the probabilistic output from their Bayesian elastic registration tool. However, their estimates of uncertainty are limited by the ad-hoc regularisation that they use for this particular task. Additionally, their approach is computationally expensive for estimating a high-resolution mapping, as they use MCMC for inference. In contrast, the method we propose provides a tractable and principled approach to estimating the regularisation weighting and approximating the level of spatial uncertainty in high-resolution registration. We envisage that this spatial uncertainty information could be utilised in further statistical processing for a wide variety of tasks. In (Simpson et al., 2011) we showed that the mapping uncertainty could be used to estimate an appropriate smoothing kernel to compensate for misregistration. This was shown to increase the ability to classify spatially normalised subjects with Alzheimer's disease from healthy elderly controls. Further work could either take the form of directly modelling spatial uncertainty at the group-statistics level, e.g. in fMRI data analysis as demonstrated in (Keller et al., 2008), or of incorporating this uncertainty into registration-derived features used for further analysis, such as Jacobian values for tensor-based morphometry, which would then also be probabilistic.

The second advantage of our formulation is that we have developed a generic Bayesian framework for non-rigid registration, which would allow us to incorporate more complex adaptive spatial priors or noise models, providing a more flexible approach to registration. For example, a more complex spatial prior could be inferred in the same manner; this could allow the inference of multiple λ values, either varying over space or weighting a linear combination of precision components, for example the Lamé parameters for linear elastic regularisation. This framework could also be implemented with a variety of transformation models, including those which are guaranteed to produce diffeomorphic mappings, for example stationary velocity fields (Ashburner, 2007) or variable velocity fields via geodesic shooting (Ashburner and Friston, 2011). This could lead to some improvements in modelling larger deformations. The VB framework also permits model comparison using the variational free energy F, which could be used, for example, to compare different control point spacings or spatial priors (as long as they are non-singular). However, for the framework presented here there are two restrictive factors: firstly, the VD factor α, used to compensate for the spatial covariance of the residual image as defined in the Noise model section, would need to be fixed between methods, as it would otherwise alter the data; additionally, the Taylor series expansion will give different uncertainties at different local minima. Therefore, we decided not to consider model comparison in this paper.
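As background for how a single regularisation weight of this kind can be inferred within a variational scheme, the block below states the standard conjugate mean-field update for a precision-style parameter λ with a Gamma prior. This is a generic textbook result written under assumed notation (w denoting the K transformation parameters, Λ a fixed precision-structure matrix such as a bending-energy matrix, and a₀, b₀ the prior shape and rate); it is not a transcription of the update equations in this paper's Methods.

```latex
% Prior: p(w \mid \lambda) = \mathcal{N}\!\left(w;\, 0,\, (\lambda \Lambda)^{-1}\right),
%        p(\lambda) = \mathrm{Ga}(\lambda;\, a_0, b_0).
% With q(w) = \mathcal{N}(w;\, \mu_w, \Sigma_w), the mean-field update is
\begin{align}
  q(\lambda) &= \mathrm{Ga}(\lambda;\, a, b), \\
  a &= a_0 + \tfrac{K}{2}, \\
  b &= b_0 + \tfrac{1}{2}\!\left( \mu_w^{\mathsf{T}} \Lambda\, \mu_w
        + \operatorname{tr}\!\left(\Lambda\, \Sigma_w\right) \right), \\
  \mathbb{E}[\lambda] &= a / b .
\end{align}
```

Under these assumptions the expected precision E[λ] grows when the expected prior energy of the transformation parameters is small relative to their number, which is the sense in which the data, rather than the user, sets the level of regularisation.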
One of the questions which remains unanswered in this paper is how to pre-smooth the images. For this work, we chose a moderate level of image pre-smoothing to provide accurate registration across a range of SNRs. However, the optimal image smoothing is
