3D/2D registration using cores of tubular anatomical structures as a basis. Alan Liu, Elizabeth Bullitt, Stephen M. Pizer


Medical Image Display & Analysis Group, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599

Abstract. The 3D/2D registration problem is, for a specified object, to compute a projection that best matches a given 2D image of that object. 3D/2D registration is an important step in some multimodality visualization applications and in image-guided surgery, where the dimensionality of images from different sources is not the same. Previous work in this area has employed fiducial marks or surface structures as a registration basis. While these methods are effective, they can require time-consuming preprocessing or user intervention and do not always provide a high level of accuracy. In this paper, we give an alternative method for solving the 3D/2D registration problem that employs non-surface structures as a registration basis. Our method is robust and produces excellent accuracy when tested in a series of experiments.

Support: This research was supported under NIH grant P01 CA47982.

Key words: 3D/2D registration, image fusion, cores, frameless stereotaxy.

1. Introduction

A general strategy for registration can be divided into three parts: modeling the registration transform, choosing the registration basis, and finally selecting an optimization method and a corresponding metric that gives a measure of the goodness of a registration. This section elaborates on each part and provides a brief overview of recent advances. Later in the paper, we show how our registration method compares to other 3D/2D approaches.

At a fundamental level, registration involves the transformation of coordinate systems from one image to another so that corresponding points on the object have the same image coordinates. To register 3D computerized tomography (CT) with 3D magnetic resonance imaging (MRI), then, is to derive a transformation that will map the voxels from one study onto the other so that any given point on the object coincides on both images. It is useful to consider the problem in terms of the change of coordinates implied by each imaging process. Thus,

Definition 1. An imaging process $I$ is a transformation that maps 3-space into $n$-space, i.e., $I: \mathbb{R}^3 \to \mathbb{R}^n$, where $2 \le n \le 3$ and $\mathbb{R}^k$ is Euclidean $k$-space.

Thus, a 3D imaging process such as CT is a function $I: \mathbb{R}^3 \to \mathbb{R}^3$ mapping an object from world coordinates (more precisely, CT scanner coordinates) into voxel coordinates, whereas a plain film x-ray is generated by an imaging process function of the form $I: \mathbb{R}^3 \to \mathbb{R}^2$. Note that the latter implies projection, which in a typical 2D x-ray image can be modeled by some variant of the perspective transform. In this paper, the value of $n$ should be clear from context; where it is ambiguous, the dimensionality will be given.

Definition 2. Let $I_A: \mathbb{R}^3 \to \mathbb{R}^n$ and $I_B: \mathbb{R}^3 \to \mathbb{R}^m$ be imaging processes. The registration of $I_A$ to $I_B$ is the process of finding a function $T: \mathbb{R}^n \to \mathbb{R}^m$ such that $T(I_A(\mathbf{x})) = I_B(\mathbf{x})\ \forall \mathbf{x} \in \mathbb{R}^3$; i.e., $I_A$ is registered to $I_B$ when points on objects common to both images coincide. Call $T$ an $n$D/$m$D registration.

The following examples help to clarify the above. Consider a CT/MRI volume registration. The patient is scanned under both modalities. In each case, the 3D region of interest is mapped into voxel coordinates. Let $I_{CT}$ and $I_{MRI}$ be the respective imaging processes. Thus we may write $I_{CT}: \mathbb{R}^3 \to \mathbb{R}^3$ and $I_{MRI}: \mathbb{R}^3 \to \mathbb{R}^3$. Finding a registration between the two studies means finding a function $T: \mathbb{R}^3 \to \mathbb{R}^3$ such that the images are aligned. Since both the domain and range of $T$ are in $\mathbb{R}^3$, this is an example of 3D/3D registration.

Now suppose a 3D volume (e.g., preoperative CT) is to be registered against a 2D image (e.g., an intraoperative fluoroscopic image). Let $I_{3D}: \mathbb{R}^3 \to \mathbb{R}^3$ and $I_{2D}: \mathbb{R}^3 \to \mathbb{R}^2$ be the respective imaging processes. Then to find a 3D/2D registration between them is to determine a function $T: \mathbb{R}^3 \to \mathbb{R}^2$ satisfying the criteria given in Definition 2.

Note that Definition 2 allows $T$ to be an arbitrary transformation. In general, some assumptions are made about the nature of $T$. In 3D/3D registration, for example, $T$ may be restricted to the class of rigid transformations, as in the case of [24]. In other examples the class involves a rigid transformation composed with linear scaling (e.g., [7], [8]). In 3D/2D registration, $T$ might involve the composition of a 3D coordinate transformation and a projection.

In both 3D/3D and 3D/2D registration, situations exist where distortions acquired during imaging are significant, or it may be that the 3D image is a generalized atlas. It may then be necessary to have $T$ account for them. For example, in [2] the registration function $T$ allows nonlinear local distortions in 3D/3D registration. Alternatively, $T$ might model a video camera or some other physical device. For example, [4] maps a preoperative MRI scan onto a live video signal. The preoperative image is projected so that it is consistent with the TV camera acquiring the intraoperative image.

The basis for registration is the set of image entities used by the algorithm to compute $T$. These are typically intensities or loci which appear in both images (and thus are known to correspond to the same set of points in $\mathbb{R}^3$). Various kinds of bases have been tried, including generalized image intensities, landmarks, curves, and surfaces.

Correlation of intensities or their sign changes (e.g., [24], [22]) is a long-standing approach to registration. This approach has been generalized to intensities derived from the initial image or even sets of such derived intensities [23]. A promising improvement of this approach employs the information-theoretic measure of mutual information as the metric for registration ([25], [7], [8]). The method requires only very coarse region-of-interest segmentation and, in theory, uses every pixel as a basis. However, at the time of writing, we are not aware of this method being used for 3D/2D registration.

The remaining methods have bases in the form of image loci: sets of landmarks, curves on surfaces, or the surfaces themselves. Many variations of these have been employed in 3D/2D registration.

Landmark based 3D/2D registration methods involving stereotactic headholders [3], skin fiducials [8], points located by digitizing wands [2], and anatomic landmarks located in image data have been tried. To produce a registration, a metric such as Procrustes that returns a minimal value when these points are registered is used, and some form of optimization method is applied to locate this minimum. These methods lend themselves to rapid registration once the landmarks and their correspondences are defined. Unfortunately, the accuracy of landmark based registration is critically dependent on establishing a rigid correspondence between fiducials and the region of interest and on precisely locating their positions on the image. Automatic definition of artificial landmarks is normally possible, but the relation of these landmarks to the anatomy is sometimes nonrigid, as when they are placed on the skin or when they are well separated from the anatomic target by soft tissue. Stereotactic headholders eliminate this shifting but can be uncomfortable for the patient and are not always favored by the surgeon. In addition, with methods dependent on artificial landmarks or headholders, retrospective registration is not possible. Choosing anatomic landmarks works well in an interactive environment where the input image has good contrast and a high resolution, but the accuracy of selection may be inadequate in noisy or blurred images. Moreover, the interactive selection of an adequate number of landmarks may be time consuming, and the automatic selection of these landmarks, while sometimes possible, is a challenging task of computer vision.

Curve and surface based methods have an advantage over landmarks in that they in effect encompass many points. They thus reduce the need for precise point placement but are not completely free of this requirement. They have the additional advantage of requiring the specification of only the few curve or surface component correspondences. Curve based methods may require the additional step of establishing point correspondences between curves, but this can usually be done automatically.

Among the various methods used to accomplish this are those based on curvature (e.g., [3]) or proximity (e.g., [1]). Curves and boundaries have been employed in conjunction with surfaces in 3D/2D registration. For example, [1] employs a curve based method for retrospective registration of angiographic and MRA data. [6] has developed a method of 3D/2D registration which registers the 2D silhouette of an object against its 3D segmented surface. A signed distance function indicates the goodness of fit, with values closer to zero being better. [4] employs pure surface-based registration in his image-guided surgical application by the use of structured light to deduce a 3D surface from 2D camera images. The cost function is based on computing the smallest distance between points on the two surfaces. A local optimization method with multiple initial start points is employed to ensure robust convergence.

Surface and curve based 3D/2D registration can provide a good degree of accuracy, but their extraction can be dependent on subjective judgment and may require a fairly time-consuming preprocessing stage. For example, both [6] and [4] require segmentation of the object's surface. Using manual segmentation, delineation of the boundary (and hence registration accuracy) is based on subjective judgment for, at current voxel resolutions, there remains a degree of fuzz around an object's boundary which makes locating edges difficult. Even with automated and semi-automated segmentation methods such as active contours [5], a different choice of parameter settings by the user can cause the segmented portion of the object to change. Moreover, differences in image contrast and noise levels will cause variations in the segmented result and must be manually compensated. In the next section, we present a registration method which addresses many of these problems.

2. Method

We begin our discussion by giving an overview of a specific 3D/2D registration problem detailed in [6]. The key aspects of this problem serve to illustrate the main points in the formulation of our algorithm. In our application, we are given a 3D MR angiogram and a 2D x-ray angiogram of the same patient. We are also given all relevant intrinsic parameters of the 2D image, such as the source to film distance and the size of the image plane. We would like to know the location and orientation of the 2D imaging device with respect to voxel coordinates when the 2D image was acquired.

2.1 Modeling the transform

Let $I_{3D}$, $I_{2D}$ be the imaging processes for the 3D MRA and 2D x-ray respectively. Ignoring MR induced distortions for now, $I_{3D}$ is simply an isometry (see Definition 4 in Section 2.3). From Definition 2, we have $T \circ I_{3D} = I_{2D}$. Since we wish to find the attitude of the x-ray device in voxel coordinates, we may discard $I_{3D}$ and write $T = I_{2D}$. From a modeling standpoint, $I_{2D}: \mathbb{R}^3 \to \mathbb{R}^2$, where $I_{2D} = PE$ such that $P$ is a perspective transformation and $E$ is the attitude matrix. Replacing $I_{2D}$ with $PE$, we have $T: \mathbb{R}^3 \to \mathbb{R}^2$ such that $T = PE$. We may assume $P$ to be given, or it can be determined by various camera calibration techniques (e.g., [20]). Our algorithm will compute $E$.

2.2 Registration basis

Our algorithm employs cores, a novel basis for registration. Cores are loci of generalized local maxima (ridges) of a medialness function derived from an underlying image. The core has been developed as an object description technique by Pizer and colleagues.

Cores describe a simple figure in terms of the position of its medial axis at scales proportional to its width [2]. Here, the notion of width corresponds to the radius of the largest fuzzy circle that fits within the figure. By a fuzzy circle, we mean a circle with a boundary that has been blurred by a normalized Gaussian whose standard deviation parameter is a function of the circle's radius. Equivalently, we may consider a circle with a hard boundary that best fits a normalized Gaussian blurred instance of the figure, where the degree of blurring is proportional to the radius of the circle. Fig. 1 illustrates.

Fig. 1: A teardrop shaped object. The dotted line represents the core middle. It is the locus of a set of circles whose fuzzy boundaries are tangent to the object's boundaries.

Since cores contain both spatial and width components, they lie in a space one dimension higher than the image from which they are extracted. For example, cores extracted from a 2D image are curves in the scale space $\mathbb{R}^2 \times \mathbb{R}^+$. Note that the scale component is positive. Cores are classified according to their dimensionality and to the dimensionality of the source image. Thus, cores extracted from 2D images are classified as 1D-from-2D cores and written as $1D_2$. In higher dimensional spaces, cores are $m$-dimensional manifolds, where $m < n$ and depends on the figure being extracted.

For the particular case of 3 dimensions, objects whose cross section is approximately circular produce $1D_3$ cores, while objects with elliptical cross sections produce $2D_3$ cores. We shall call the former tubular objects. A core middle is the orthogonal projection of a core onto its spatial components; it forms a skeleton. The left image of Fig. 2 shows the core middle of a $1D_2$ core extracted from the carotid artery in a 2D image, while the right image shows width information.

Fig. 2: Anterior-posterior (AP) angiograms. A core has been extracted from the carotid artery (arrow) and its spatial projection displayed on the left image. The right image shows the core's width information.

Cores exhibit remarkable stability to noise, blurring and poor contrast [9]. These are situations in which boundary and landmark based algorithms generally do the worst. In addition, different medialness operators can be employed depending on the type of image encountered. A more detailed discussion is deferred until Section 5. User interaction is still required for core extraction but is restricted to merely indicating the vicinity where a core can be found. A local search for an initial core point is performed automatically, and once this has been found, the entire core is automatically extracted. Thus, core extraction is completely deterministic even in cases where user interaction cannot pinpoint the same location in successive repetitions. The degree of user interaction required is also considerably reduced.

Excluding the case of self-occlusion, tubular objects with constant cross section have core middles which are invariant to orthographic projection. That is, projecting a 3D tubular object to 2D and generating a $1D_2$ core middle from it yields the same result as projecting the $1D_3$ core middle of the original object. To see this, consider a circle $C$ with radius $r$ and center $c$. In general, an orthographic projection of $C$ produces an ellipse with major axis of length $2r$ such that $c$ is projected onto the midpoint of the major axis. Fig. 3 (left) illustrates. Now consider a tube with a circular cross section of constant radius. We may consider this to be a stack of infinitesimally thin circles such that the tube middle is the set of centers of each circle. An orthographic projection produces a 2D ribbon of width $2r$. The middle of the projection is a locus of points such that (ignoring end points) it is at the midpoint of the shortest line connecting the two edges. But this line is precisely the major axis of a circular cross section (Fig. 3, right). Thus, the projection of $c$ falls on this locus and hence, the projection of the tube middle falls onto the middle of the tube projection. In practice, this observation remains valid for a perspective projection if the radius of the tube is small compared to its distance from the camera.

In situations where the cross section is not constant, the deviation from this observation is small. The following analysis gives an idea of the magnitude of this error. We make the simplifying assumption that figure boundaries are unblurred and the core middle is the locus of circles whose boundary is tangent to at least two points on the figure.

Fig. 3: Left: orthographic projection of a circle. Right: cross-section of a tube.

A general tubular object is defined by the locus of the boundary of a circle moving along a space curve such that its center is on the curve and the normal to the circle is tangent to the curve. The radius of the circle is a function of its distance along the curve. As for the case of a constant width tubular object, the orthographic projection of a general tube is defined by the orthographic projection of its circular cross sections; these projections still appear as ellipses, and the projection of the tube middle lies on the midpoint of the major axes of these ellipses. However, it is now no longer necessarily true that the ribbon's (i.e., the tube projection's) middle is the same as the projected tube middle. The extent to which this differs depends on the local curvature of the boundaries and the rate of widening of the ribbon. An example illustrates. Consider a tube that widens only on one side. Fig. 4 shows a projection of such a tube from the side. In this situation, the cross section of the tube projects as a line. The solid line represents the projection of the tube middle. The core middle of the projection is the locus of centers of circles that are touching at least two points on the boundaries of the object. As the figure shows, this locus is different.

To analyze the amount of deviation between ribbon middle and projected tube middle, we make the following observations. Locally, the boundaries of a ribbon may be considered to be straight but not necessarily parallel.

Fig. 4: A tubular object with an abrupt change in width. The middle solid line represents the projection of the tube middle. The dashed line represents the core middle extracted from the tube projection.

Let the ribbon boundaries be widening at an angle of $\theta$. Let there be an abrupt change in the width of the boundary within this local neighborhood at points $t_1$ and $t_2$ such that the new direction of the boundary forms an angle of $\varphi$ with the old. Consider the extreme case when $\varphi = 90$ degrees. Fig. 5 (left) illustrates our argument. This may be considered to be a worst case since, in practice, it is unlikely any tubular anatomical object will exhibit such behavior. The boundaries may be considered as planar curves; call these curves $a$ and $b$. The curvature function of $a$ and $b$ is everywhere zero except at the point of widening. That is, a plot of the curvature function for $a$ and $b$ produces a $\delta$ or point function. Recall that a core is the locus of centers of circles that best fit a normalized, Gaussian blurred instance of the object, where the degree of blurring is proportional to the circle's radius. Let $\sigma$ be the degree of blurring; then the blurring effect limits the maximum value of the curvature function possible for $a$ and $b$. A reasonable approximation of this limit can be obtained by convolving the curvature function with a 1D normalized Gaussian of standard deviation $\sigma$. Since the curvature is a $\delta$ function, its convolution with a normalized Gaussian produces an identity transformation of the Gaussian.

Thus, the maximum curvature for blurred versions of $a$ and $b$ is $\frac{1}{\sqrt{2\pi}\,\sigma}$. Fig. 5 (right) illustrates the extent of error possible using this estimate of maximum curvature. Locally, the ribbon boundary is straight except for a change in direction at $t_1$ (a similar change occurs for the other boundary, which is not shown for clarity). The ellipse represents the projection of a circular cross section after this deviation, and its midpoint $c_1$ is the projection of the tube middle for this cross section. The center of the circle (given by $c_2$) gives the location of the $1D_2$ core middle. In the worst case, the length of $c_1c_2$ is equal to $p_1p_2$. A straightforward trigonometric computation gives

$$p_1p_2 = \frac{r\tan\varphi\tan\theta}{\cos\theta} = \frac{r\tan\varphi\sin\theta}{\cos^2\theta}. \quad \text{Equation (1)}$$

Since $\tan\varphi \le \frac{1}{\sqrt{2\pi}\,\sigma}$, we may write

$$c_1c_2 \le \frac{1}{\sqrt{2\pi}\,\sigma} \cdot \frac{r\sin\theta}{\cos^2\theta}. \quad \text{Equation (2)}$$

As discussed previously, $r$ and $\sigma$ are related by a constant of proportionality, i.e., $\sigma = kr$. In our implementation, $k$ is normally 0.5, so we have

$$c_1c_2 \le \sqrt{\frac{2}{\pi}} \cdot \frac{\sin\theta}{\cos^2\theta}. \quad \text{Equation (3)}$$

We can reasonably expect that even for the most severe cases, $\theta \le 30$ degrees, in which case $c_1c_2 < 0.532$ pixels, or about a half pixel error. The reader is reminded that this value is for an instance where $\varphi = 90$ degrees, that is, when the walls of the tube abruptly make a 90 degree turn from their initial direction. In typical cases, tubular anatomical objects such as blood vessels have an approximately constant or slowly growing width (e.g., $\theta \le 10$ degrees) and do not widen abruptly (i.e., $\tan\varphi$ is well below the bound of Equation (2)), so we have $c_1c_2 \ll 0.143$. That is, the error is considerably less than 1/5 of a pixel.

Fig. 5: Left: Thick lines represent the projection boundary. The $1D_2$ core point is located at the center of the circle, denoted by $c_2$. Here, $\varphi = 90$ degrees. Right: The effect of blurring is to restrict the value of $\varphi$. The maximum deviation of a core point from the tube middle (given by the line segment $c_1c_2$) can be computed given $\theta$, $\varphi$ and $r$.

Thus, if we are careful to avoid areas of self occlusion, core middles provide us with a basis for registering tubular objects against their projections. This basis reduces the problem of registration between images to one of registration between sets of curves if a sufficient number of tubular objects are available.

2.2.1 Core extraction

The algorithms describing a method for extracting $1D_2$ and $1D_3$ cores can be found in [10]. In the special case in which tubular objects have good contrast with their surroundings and possess fairly uniform intensities, a method for approximating cores using intensity ridges can be employed [2]. This method is significantly faster than true core computation and produces an acceptable approximation to true cores. Using this method, it takes less than 5 seconds on an HP 715/100 to extract a tubular object of approximately 250 voxels in length. Fig. 6 shows the cores extracted from a 3D MR angiogram (MRA) study of the head using this algorithm. The operator took approximately 20 minutes of user time on a DECStation 25 to extract more than 100 vessels.

Applying the core extraction process to both 2D and 3D images and then taking their core middles yields a set of 2D and 3D curves respectively. Curves allow a computationally appealing and robust optimization method to be employed. The following section describes how these two sets of curves are registered.

Fig. 6: The left image shows core middles extracted from a 3D MRA study of the head. The right image is a 3D rendering of core middles with width information included. The red wireframe structure in the middle is the ventricle, shown for orientation purposes.

2.3 Curve-based 3D/2D registration algorithm

Since we are dealing with images taken from the same object, a 2D curve will have a corresponding 3D counterpart. Note that the special case of a curve seen end-on results in a point; we may consider this to be a constant curve, i.e., a curve such that $\beta(s) = p\ \forall s$, where $p$ is a constant. A 2D/3D curve pair may have endpoints that do not necessarily coincide. It is thus necessary for the registration algorithm to take this into consideration. One method is to determine the common segments on each corresponding 2D/3D curve pair. Given a perfect registration, this task is trivial, since we can project the 3D curve and then do a pointwise match with its 2D counterpart. Since we do not have this perfect situation, our algorithm deals with this lack by adopting an iterative two-pass approach using a paradigm similar to that of [4], [26] and [5]. At the beginning of each pass, we have an estimate of the true registration (in the first pass, the user provides an approximate guess). An alignment phase uses this guess to extract a list of common points for each pair of curves (alignments are discussed in greater detail in Section 2.4.3). A local gradient descent phase uses these points to refine the current estimate of the registration value. The first phase is then called again to provide a better alignment, and the algorithm iterates until it converges to a satisfactory solution as determined by a suitable metric.

The method of gradient descent we use is similar to Lowe's algorithm [7], but we employ a different method of expressing the registration transformation which is better suited for our purpose and is slightly more complex than Lowe's. Another variant is also mentioned in the appendix of [22], but details of its derivation are not given. The remainder of this section gives a precise statement of the problem the algorithm is to solve and describes the details of the algorithm.

Definition 3. Let $S$ denote a set of curves in 3-space, i.e., $S = \{\alpha_j(s) \mid j = 1, 2, \ldots\}$, where $\alpha_j: s \to \mathbb{R}^3$ is a $C^3$ or greater curve.

Definition 4. Let $T_p = PE$ be a perspective projection from $\mathbb{R}^3$ to $\mathbb{R}^2$ such that $P: \mathbb{R}^3 \to \mathbb{R}^2$ and $E$ is an orientation preserving isometry, i.e., the orthogonal component of $E$ has a positive determinant. The term extrinsic will also be used to refer to $E$.

As mentioned in the introduction, $T_p$ is intended to model an x-ray device that produces 2D images. As a first-order approximation of such a device, we write

$$P(x, y, z) = \left(\frac{fx}{z}, \frac{fy}{z}\right) \quad \text{Equation (4)}$$

i.e., the x-ray camera performs a perspective transformation with focal length $f$. Parameters controlling the orientation and location of the x-ray camera are described by $E$. A more accurate model of the physical system must consider other factors: image skew, aspect ratio and optical center, among others. It is the goal of the algorithm to compute $E$. For the purpose of this discussion we assume an ideal camera, i.e., $P$ is given by Equation 4.
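The camera model above is compact enough to state directly in code. The following numpy sketch (the function names are ours, not the paper's) implements Equation (4) together with the extrinsic, using the parameterization $E(x) = R(x - t)$ that Section 2.4.4 adopts:

```python
import numpy as np

def apply_extrinsic(points, R, t):
    # E(x) = R(x - t): orientation-preserving isometry, det(R) = +1.
    # `points` is an (N, 3) array of world-coordinate points.
    return (points - t) @ R.T

def perspective_project(points, f):
    # Equation (4): (x, y, z) -> (f*x/z, f*y/z), focal length f.
    return f * points[:, :2] / points[:, 2:3]

def T_p(points, R, t, f):
    # The composite map T_p = P o E of Definition 4.
    return perspective_project(apply_extrinsic(points, R, t), f)
```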

Definition 5. Let $S(T_p) = \{\gamma_j \mid \gamma_j = T_p\,\alpha_j,\ \alpha_j \in S\}$ be the set of 2D planar projections under $T_p$ of curves in $S$. $\gamma_j$ is assumed to be $C^3$ or greater.

Definition 6. Let $A$ denote a set of planar curves, i.e., $A = \{\beta_k(s) \mid k = 1, 2, \ldots\}$, where $\beta_k: s \to \mathbb{R}^2$ is a $C^3$ or greater curve that is a noisy and possibly incomplete projection of a curve in $S$.

Given the above, the problem may be stated as follows: given $S$, $P$ and $A$, compute $E$ (and hence $T_p$) such that the set $A$ matches $S(T_p)$ as closely as possible when using the metric given in Definition 9, Section 2.4.3.

2.4 Establishing Curve Correspondence

Definition 7. A correspondence function $k: A \to \{1, 2, 3, \ldots\}$ associates a curve $\beta \in A$ with the index number of a curve in $S$ and $S(T_p)$.

We use the correspondence function to relate curves in $A$ with their 3D counterparts in $S$. An example will make this clearer. Recall from Definition 3 that $S$ is an indexed set, i.e., each element $\alpha$ in $S$ has a unique index value. Suppose the counterpart of $\beta_i$ in $A$ is $\alpha_j$ in $S$. Then $k(\beta_i) = j$, and we say that $(\beta_i, \alpha_{k(\beta_i)})$ form a correspondence pair. $k$ can be used to express correspondence between $A$ and $S(T_p)$ as well. From Definition 5, we note that curves in $S(T_p)$ have an identity correspondence with $S$; i.e., $(\alpha_l, \gamma_m) \in S \times S(T_p)$ is a correspondence pair iff $l = m$, $l \in \{1, 2, \ldots, |S|\}$, $m \in \{1, 2, \ldots, |S(T_p)|\}$.

Thus, we may write $(\beta_i, \gamma_{k(\beta_i)})$ is a correspondence pair iff $(\beta_i, \alpha_{k(\beta_i)})$ is. Where it is unambiguous, we relax the notation somewhat and write $k(\beta_i) = \alpha_j$ or $k(\beta_i) = \gamma_j$.

In our present implementation, $k$ is defined interactively. An automated means of establishing curve correspondence based on spatial proximity is also possible and will be implemented in a future version.

2.4.1 Algorithm description

The algorithm takes as input $(A, P, S, E_0, k)$, where $E_0$ is an estimate of $E$, $k$ is a curve correspondence function, and the other variables are as defined previously. Given the initial estimate $E_0$, the algorithm iteratively refines this estimate. Each iteration goes through two phases: computing the curve alignment and refining the estimate to $E$. The algorithm terminates when a mean square distance metric comparing $A$ with $S(T_p)$ falls below a threshold (a code sketch of this loop follows the listing below).

1. set $E \leftarrow E_0$
2. while threshold not reached
3.   $T_p \leftarrow PE$
4.   compute $S(T_p)$
5.   for each pair $(\beta_j, \gamma_{k(\beta_j)})$
6.     establish curve alignment function $f_j$ between $\beta_j$ and $\gamma_{k(\beta_j)}$
7.     establish curve alignment between $\beta_j$ and $\alpha_{k(\beta_j)}$ by transitivity with $\gamma_{k(\beta_j)}$
8.   end for
9.   update $E$ using $(A, S, P, E, k, \{f_j\})$ as input
10. end while
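A minimal sketch of the loop, assuming the two phases and the metric are supplied as callables; `align`, `refine` and `metric` are hypothetical stand-ins for the procedures of Sections 2.4.3, 2.4.4 and Definition 9:

```python
def register(A, S, P, E0, k, align, refine, metric,
             threshold=1e-3, max_iters=100):
    """Two-phase iterative loop of Section 2.4.1.
    A: list of 2D curves (sampled); S: dict index -> 3D curve, as in
    Definition 3; P: projection function; E0: initial extrinsic estimate;
    k: correspondence function mapping a 2D curve to its index in S."""
    E = E0
    for _ in range(max_iters):
        S_Tp = {j: P(E(alpha)) for j, alpha in S.items()}   # step 4: S(T_p)
        f = [align(beta, S_Tp[k(beta)]) for beta in A]      # steps 5-8
        E = refine(A, S, P, E, k, f)                        # step 9
        if metric(A, S_Tp, k) < threshold:                  # loop test, step 2
            break
    return E
```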

2.4.2 Establishing curve correspondence

Lines 5 to 7 of the algorithm require that each curve in $A$ be placed in correspondence with curves in $S(T_p)$ and $S$. The function $k$ defines this pairing.

2.4.3 Establishing curve alignment

Given a pairing $(\beta_i, \gamma_{k(\beta_i)})$, an alignment function $f: D \to \mathbb{R}$ is one that has the property $\beta_i(s) = \gamma_{k(\beta_i)}(f(s))$. Note that $f$ does not satisfy the true mathematical definition of a function in that it may not be defined for all points in $D$. In our implementation, we choose a proximity-based approach. This method depends on spatial proximity between curve projections. It has been found to be robust under conditions of noise and in pairings where the curves are of different length.

Let $(\beta, \gamma)$ be a curve pairing, where both $\beta$ and $\gamma$ are planar curves. Let their Frenet apparatus be $(T_\beta, N_\beta, B_\beta, \kappa_\beta)$ and $(T_\gamma, N_\gamma, B_\gamma, \kappa_\gamma)$ respectively (note that torsion $\tau$ is 0 for planar curves).

Definition 8. Let $(\beta, \gamma) \in A \times S(T_p)$ be a correspondence pair. Let $p \in \beta$ and $\rho_\gamma: \beta \to \gamma$. The perpendicular point $\rho_\gamma(p)$ on $\gamma$ is the closest point to $p$ when measured along the line passing through $p$ and parallel to $N_\beta(p)$. Fig. 7 illustrates.

Proximity alignment begins with the assumption that $\beta$ and $\gamma$ are spatially close to each other. Using proximity alignment, $f$ can thus be defined as $f: D \to \mathbb{R}$ such that $\gamma(f(s)) = \rho_\gamma(\beta(s))$.

Fig. 7: The perpendicular point $\rho_\gamma(p)$.

In practice, $f$ is computed for discrete values by taking a set of points $\{p_1, p_2, \ldots, p_n\}$ on $\beta$, evaluating $\rho_\gamma$ for each $p_1, p_2, \ldots, p_n$ and then computing the corresponding parameterization of $\gamma$ for $\rho_\gamma(p_1), \rho_\gamma(p_2), \ldots, \rho_\gamma(p_n)$. This simple function works well even when the curves are not close to each other if simple consistency checks are imposed, such as not accepting any point pairing if the distance between $p$ and $\rho_\gamma(p)$ is greater than two standard deviations from the average distance between all points $q \in \beta$ and $\rho_\gamma(q)$.

The perpendicular point can also be used to provide a measure of how accurate a registration is.

Definition 9. Define the distance measure $M(A, S(T_p))$ to be

$$M(A, S(T_p)) = \sum_{\beta \in A} \frac{\displaystyle\int_{s \in \beta} \left\| \beta(s) - \rho_{k(\beta)}(\beta(s)) \right\|^2 \, ds}{l(\beta)}$$

where $k$ is the correspondence function (here we use the informal assumption that $k: A \to S(T_p)$) and $l(\beta)$ is the arc length function for $\beta$. In words, $M$ is the mean square value of the perpendicular distance between points on $\beta$ and its paired curve $\gamma$, summed over all $\beta \in A$.
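For concreteness, here is a sketch of the perpendicular point and of $M$ on sampled (polyline) curves. The half-pixel line tolerance `tol` is our assumption, and the metric follows the discrete form given in the next paragraph:

```python
import numpy as np

def perpendicular_point(p, n, gamma_pts, tol=0.5):
    """Definition 8 on a sampled curve: among samples of gamma lying within
    `tol` of the line through p parallel to the normal n = N_beta(p), return
    the one closest to p along that line (None if the line misses gamma)."""
    d = gamma_pts - p
    along = d @ n                                   # signed offset along the line
    off = np.linalg.norm(d - np.outer(along, n), axis=1)
    candidates = np.flatnonzero(off < tol)
    if candidates.size == 0:
        return None                                 # f is allowed to be partial
    return gamma_pts[candidates[np.argmin(np.abs(along[candidates]))]]

def metric_M(pairs):
    """Definition 9, discretized: per curve, the mean squared perpendicular
    distance over its sample points, summed over all curves. `pairs` holds
    (beta_pts, beta_normals, gamma_pts) triples for each correspondence."""
    total = 0.0
    for beta_pts, normals, gamma_pts in pairs:
        d2 = []
        for p, n in zip(beta_pts, normals):
            q = perpendicular_point(p, n, gamma_pts)
            if q is not None:
                d2.append(np.sum((p - q) ** 2))
        if d2:
            total += np.mean(d2)
    return total
```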

In practice, the distance measure is computed at discrete points along $\beta$. That is, let $B_\beta = \{\beta(s_1), \beta(s_2), \ldots, \beta(s_n)\}$ be a set of $n$ uniformly distributed points on the curve $\beta$. Then

$$M(A, S(T_p)) = \sum_{\beta \in A} \frac{\displaystyle\sum_{p \in B_\beta} \left\| p - \rho_{k(\beta)}(p) \right\|^2}{|B_\beta|}$$

where $|B_\beta|$ is the cardinality of $B_\beta$.

2.4.4 Refining E

As in the case of the correspondence function, since $S(T_p)$ is a projection of $S$, the alignment between $\alpha \in S$ and $\gamma \in S(T_p)$ is given by the identity function. Thus, given an alignment function for each curve pairing $(\beta, \gamma) \in A \times S(T_p)$, the alignment function for $(\beta, \alpha) \in A \times S$, where $\gamma = T_p\alpha$, is also known. At this point, a relation between projected points $w_{T_p}$ and their 3D counterparts $w$ has been established. This can be used to compute the extrinsic $E$. The method developed is based on an improvement to Lowe's algorithm [7]. An improvement very similar to the method described below is suggested in the appendix of [9], but no details as regards its derivation are given.

Let $w_{T_p} = T_p w$ where $w \in \mathbb{R}^3$, $T_p = PE$ and $E$ is an orientation preserving isometry. $E$ may also be viewed as a function $E: \mathbb{R}^3 \to \mathbb{R}^3$ such that $E(x) = R(x - t)$, where $R: \mathbb{R}^3 \to \mathbb{R}^3$ is an orthogonal transformation that is orientation preserving (and hence excludes reflections) and $t \in \mathbb{R}^3$ is a translation vector.

Since $R$ may be expressed as the composition of rotations $\varphi_x, \varphi_y, \varphi_z$ about the x, y and z axes respectively, we can write $E(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z)$. Since $P$ is given, we have $T_p(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z)$. Writing $T_P$ as a first order Taylor series about $T_{P0} = PE_0$ yields

$$T_P = T_{P0} + dT_P(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z) \quad \text{Equation (5)}$$

where $dT_P$ is the Jacobian of partial derivatives of $T_P(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z)$ evaluated at 0. Now, suppose we wish to determine $E$ and are given $w$, $w_{T_p}$, $P$ and $E_0$ such that $\|w_{T_p} - PE_0w\| \le \varepsilon$. Then, since $E_0$ is close to $E$, $w_{T_p} = T_Pw = (T_{P0} + dT_P(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z))w$. Thus,

$$w_{T_p} - T_{P0}w = dT_P(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z)\,w \quad \text{Equation (6)}$$

Since a correct registration will put the projection of $w$ directly over $w_{T_p}$, this reduces to

$$w_{T_p} - T_{P0}w - dT_P(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z)\,w = 0 \quad \text{Equation (7)}$$

This is essentially one step of Newton's method with multiple variables. As with Newton's method, iteration to a satisfactory convergence is required in situations where $w_{T_p} - PE_0w$ is within a small neighborhood of 0. Two remarks can be made about the discussion up to this point.

First, the correction vector $(\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z)$ can be expressed as an extrinsic, say $E_1$. Second, if the correction vector is small, then $T_P = PE_1E_0$. Let $E_1 \leftarrow E_1E_0$; then $T_{P1} = PE_1$. Thus, successive iterations produce $T_{P2}, T_{P3}, \ldots, T_{Pn}$ and $E_2, E_3, \ldots, E_n$, where $T_{Pn} = PE_n$ and $E_{i+1} \leftarrow E_{i+1}E_i \cdots E_0$. When to halt the process may be determined by using some termination criterion (e.g., when $\|w_{T_p} - T_{Pn}w\| \le \varepsilon$, or by setting an upper bound on the number of iterations to compute). On termination, $E_n$ gives the best approximation to $E$.

For reasons of stability and minimization of perturbations due to noise and other factors, a set of points $\{w_1, w_2, \ldots, w_m\}$ is used in conjunction with a mean square method to solve Equation 7.

2.4.5 Computing $dT_P$

Recall that $T_p(x) = PEx$, where $x \in \mathbb{R}^3$ and $P: \mathbb{R}^3 \to \mathbb{R}^2$ is a projection such that $P(x, y, z) = \left(\frac{fx}{z}, \frac{fy}{z}\right)$ for some constant $f$. Writing $\chi = Ex$ yields

$$T_p = (u, v) = \left(\frac{f\chi_x}{\chi_z}, \frac{f\chi_y}{\chi_z}\right) \quad \text{Equation (8)}$$

The partial derivatives of $T_p$ can be expressed as

$$\frac{\partial (T_p)_u}{\partial \mu} = f\left(\frac{1}{\chi_z}\frac{\partial \chi_x}{\partial \mu} - \frac{\chi_x}{\chi_z^2}\frac{\partial \chi_z}{\partial \mu}\right) \quad \text{Equation (9)}$$

$$\frac{\partial (T_p)_v}{\partial \mu} = f\left(\frac{1}{\chi_z}\frac{\partial \chi_y}{\partial \mu} - \frac{\chi_y}{\chi_z^2}\frac{\partial \chi_z}{\partial \mu}\right) \quad \text{Equation (10)}$$

for $\mu \in \{\varphi_x, \varphi_y, \varphi_z, t_x, t_y, t_z\}$.
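Equations (9) and (10) are a direct application of the quotient rule and translate to code one parameter at a time; a small sketch (names ours):

```python
import numpy as np

def dTp_dmu(chi, dchi_dmu, f):
    """Equations (9) and (10): quotient-rule derivatives of u = f*chi_x/chi_z
    and v = f*chi_y/chi_z with respect to one parameter mu, given chi = E(x)
    and the derivative dchi/dmu of the transformed point."""
    cx, cy, cz = chi
    dcx, dcy, dcz = dchi_dmu
    du = f * (dcx / cz - cx * dcz / cz**2)
    dv = f * (dcy / cz - cy * dcz / cz**2)
    return du, dv
```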

Page 25 of 40 pages for µ = { φ x, φ y, φ z, t x, t y, t z }. Now χ = Ex = R ( x t) where x R 3 and R is the composition of three rotations such that R = cos φ y cos φ z cosφ y sinφ y sinφ x sinφ y + cosφ x sinφ x sinφ y + cosφ x sinφ x cosφ y cosφ x sinφ y + sinφ x cosφ x sinφ y + sinφ x cosφ x cosφ y Taking partial derivatives of χ, we have Equation () dχ -------- x = 0 dφ x Equation (2) dχ -------- x = sinφ dφ y ( x x t x ) + sinφ y ( x y t y ) + cosφ y ( x z t z ) x dχ -------- x = cosφ dφ y ( x x t x ) cosφ y ( x y t y ) x dχ -------- x = cosφ dt y x dχ -------- x = cosφ dt y y dχ -------- x = sinχ dt y z Equation (3) Equation (4) Equation (5) Equation (6) Equation (7) dχ -------- y = ( cosφ dφ x sinφ y sinφ x ) ( x x t x ) x ( sinφ x cosφ x sinφ y ) x y t y ( ) cosφ x cosφ y ( x z t z ) Equation (8) dχ -------- y = sinφ dφ x cosφ y ( x x t x ) y

Page 26 of 40 pages sinφ x cosφ y ( x y t y ) + sinφ x sinφ y ( x z t z ) Equation (9) dχ -------- y = ( sinφ dφ x sinφ y + cosφ x ) ( x x t x ) z + cosφ x sinφ x sinφ y ( ) ( x y t y ) Equation (20) dχ -------- y dt x = sinφ x sinφ y cosφ x dχ -------- y dt y = cosφ x + sinφ x sinφ y dχ -------- y dt z = sinφ x cosφ y Equation (2) Equation (22) Equation (23) dχ ------- z = ( cosφ dφ x + sinφ x sinφ y ) ( x x t x ) x + ( sinφ x sinφ y + cosφ x ) ( x y t y ) sinφ x cosφ y x x t z ( ) Equation (24) dχ ------- z dφ y = cosφ x cosφ y ( x x t x ) + cosφ x cosφ y ( x y t y ) cosφ x sinφ y ( x x t z ) Equation (25) dχ ------- z = ( sinφ dφ x + cosφ x sinφ y ) ( x x t x ) z + ( cosφ x sinφ y sinφ x ) ( x y t y ) Equation (26) dχ ------- z = cosφ dt x sinφ y sinφ x x dχ ------- z = cosφ dt x sinφ y sinφ x y Equation (27) Equation (28)

Page 27 of 40 pages dχ ------- z = cosφ dt x cosφ y z Equation (29) E 0 can be treated as the initial coordinate frame, allowing us to evaluate the partial derivatives at 0 to produce the following tables. dφ x dφ y dφ z dt x dt y dt z χ x χ y χ z 0 z -y - 0 0 -z 0 x 0-0 y -x 0 0 0 - Table : Partial derivatives of χ evaluated at 0 u v t x f - 0 z t y 0 f - z t z fx ---- z 2 fy ---- z 2 φ x ---------- fxy f y + ---- 2 z 2 z 2 φ y f x + ---- 2 fxy ------ z 2 z 2 φ z fy ------- z fx --- z Table 2: Partial derivatives of u and v evaluated at 0
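Putting Equation (7) and Table 2 together, one refinement step stacks the 2x6 Jacobian rows of all sample points and solves for the six-parameter correction by least squares. A numpy sketch under our naming, treating the current estimate as the base frame so that the evaluated-at-zero entries of Table 2 apply:

```python
import numpy as np

def newton_step(w3d, w2d, f, R, t):
    """One least-squares step for Equation (7). w3d: (N, 3) model points;
    w2d: (N, 2) matched image points; (R, t): current extrinsic estimate.
    Returns the correction (phi_x, phi_y, phi_z, t_x, t_y, t_z), to be
    composed back into the extrinsic as described in the text."""
    chi = (w3d - t) @ R.T                    # chi = E(x): current camera coords
    x, y, z = chi[:, 0], chi[:, 1], chi[:, 2]
    proj = np.stack([f * x / z, f * y / z], axis=1)
    residual = (w2d - proj).ravel()          # left side of Equation (7)
    zeros = np.zeros_like(z)
    J = np.empty((len(w3d), 2, 6))
    J[:, 0] = np.stack([-f*x*y/z**2, f*(1 + x**2/z**2), -f*y/z,
                        -f/z, zeros, f*x/z**2], axis=1)      # du row (Table 2)
    J[:, 1] = np.stack([-f*(1 + y**2/z**2), f*x*y/z**2, f*x/z,
                        zeros, -f/z, f*y/z**2], axis=1)      # dv row (Table 2)
    correction, *_ = np.linalg.lstsq(J.reshape(-1, 6), residual, rcond=None)
    return correction
```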

3. Experiments

A series of experiments was conducted to evaluate the accuracy and robustness of our algorithm. Two 3D studies were obtained. Study A is a 256 x 256 x 16 3D MRI scan of the head with voxel size 0.078 x 0.078 x 0.30 cm. Study B is a 256 x 256 x 48 3D MRI scan of the head from a different patient with voxel sizes of 0.0062 x 0.0062 x 0.100 cm. For both Study A and Study B, a set of $1D_3$ vessels was extracted using width-augmented intensity ridges: 204 $1D_3$ vessels were extracted from Study A and 109 $1D_3$ vessels from Study B. A perspective projection is then applied to the segmented vessels to produce simulated angiograms. Curves were then extracted from the simulated angiograms and used together with the 3D curves as input to the algorithm.

In the experiments, the studies were projected to 2D using a known extrinsic (call this $E_k$) to produce a 2D image. This method allows us to compare the computed value of the extrinsic (call this $E_c$) with the true value during accuracy trials. By varying parameters such as the number of curves used and the extent to which $E_c$ differed from $E_k$, we are able to evaluate the performance of our algorithm.

3.1 Evaluation of registration accuracy

Let $P = \{r \mid r \in \alpha, \alpha \in S\}$ be a set of evenly spaced points along each 3D curve. Let $P_k = E_kP$, $P_c = E_cP$; i.e., the set of points $P$ is transformed by the actual and computed extrinsics respectively. We require points in $P_k$ and $P_c$ to be paired; i.e., if $p_i \in P_k$ and $q_i \in P_c$, then $p_i = E_kr_i$ and $q_i = E_cr_i$, where $r_i \in P$. Let $d_i = \|p_i - q_i\|$ be the Euclidean distance of each pair of points. If $E_k = E_c$, then $d_i = 0\ \forall i \in \{1, 2, 3, \ldots, |P|\}$. If $E_k \ne E_c$, at least some of the distance values will be nonzero.

By evaluating the maximum, minimum, average and standard deviation of these distances, it will be possible to quantify the extent by which $E_c$ differs from $E_k$.

3.2 Evaluation of spatial distribution

We would like to know how the behavior of our algorithm varies for different choices of basis curves. We would expect curves that are well separated spatially to work better than curves that are closely clustered. At the same time, we would expect a better registration with a large number of curves than with a small number. In order to quantify our notion of spread or spatial distribution, we choose to compute the determinant of the second order central moments of regularly spaced points along both 3D and 2D curves as our measure. That is, let the set $P$ be as defined in Section 3.1. Define $Q = \{w \mid w \in \beta, \beta \in A\}$ to be a set of evenly spaced points along each 2D curve. Then the 2D measure is given by the determinant of the $2 \times 2$ matrix

$$\begin{bmatrix} \mu_{20} & \mu_{11} \\ \mu_{11} & \mu_{02} \end{bmatrix}, \quad \mu_{ij} = \sum_{w \in Q} (w_x - \bar{w}_x)^i (w_y - \bar{w}_y)^j,\ i + j = 2,\ \bar{w} = \frac{1}{|Q|}\sum_{w \in Q} w,$$

while the 3D measure is given by the determinant of

$$\begin{bmatrix} \mu_{200} & \mu_{110} & \mu_{101} \\ \mu_{110} & \mu_{020} & \mu_{011} \\ \mu_{101} & \mu_{011} & \mu_{002} \end{bmatrix}, \quad \mu_{ijk} = \sum_{r \in P} (r_x - \bar{r}_x)^i (r_y - \bar{r}_y)^j (r_z - \bar{r}_z)^k,\ i + j + k = 2,\ \bar{r} = \frac{1}{|P|}\sum_{r \in P} r.$$
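This measure is a one-liner over the pooled sample points; a sketch (assuming the points arrive as an (N, d) array, d = 2 or 3):

```python
import numpy as np

def spread_measure(points):
    """Spread measure of Section 3.2: determinant of the matrix of second
    order central moments of the pooled sample points from the basis curves."""
    centered = points - points.mean(axis=0)
    moments = centered.T @ centered        # d x d matrix of central moments
    return np.linalg.det(moments)
```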

3.3 Experiment 1: 3D/2D registration of simulated angiographic data

This experiment is a general test of the algorithm in an interactive environment. Three such angiograms were generated, one from Study A and two from Study B, each with different extrinsics. The particular views chosen had a large degree of overlap between basis objects in projection. Thirteen $1D_2$ cores were extracted from each angiogram for registration. To ensure fairness, the individual producing the angiograms was different from the individual performing the registration. The value of $E_0$ was not communicated between them until after the experiment. Fig. 8 shows the angiograms.

Fig. 8: Simulated angiogram images used for registration experiments.

3.4 Experiment 2: Test of sensitivity to choice of initial approximation

A good registration algorithm should be relatively insensitive to the choice of initial approximation of the extrinsic. This experiment evaluates the algorithm in this respect. For each angiogram in Experiment 1, an initial approximation to the extrinsic was generated by taking the actual solution and applying a set of random translations and rotations within a given range. Various ranges were tried, and for each range, 50 trials were performed.
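The paper does not spell out the sampling scheme; one plausible sketch of generating such perturbed starting extrinsics is:

```python
import numpy as np

def perturbed_extrinsic(R_true, t_true, max_angle_rad, max_shift_cm, rng):
    """Experiment 2 style starting guess (a sketch; the exact scheme is not
    specified): perturb the true extrinsic by random rotations about each
    axis and a random translation, each drawn uniformly within the range."""
    def rot(axis, a):
        c, s = np.cos(a), np.sin(a)
        i, j = [(1, 2), (0, 2), (0, 1)][axis]
        R = np.eye(3)
        R[i, i] = R[j, j] = c
        R[i, j], R[j, i] = -s, s
        return R
    angles = rng.uniform(-max_angle_rad, max_angle_rad, 3)
    dR = rot(0, angles[0]) @ rot(1, angles[1]) @ rot(2, angles[2])
    dt = rng.uniform(-max_shift_cm, max_shift_cm, 3)
    return dR @ R_true, t_true + dt
```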

3.5 Experiment 3: Test of sensitivity to choice of basis

We would expect that increasing the number and spread of basis curves will improve registration accuracy. Depending on the particular image, it is also likely that a carefully chosen basis with fewer curves in total may result in better or equal performance compared to a bad choice with more curves. To test our supposition, we conduct a series of runs on both Study A and Study B with the initial set of 13 2D cores obtained in Experiment 1. We choose random subsets of these 13 cores and perform a registration on them, keeping all other factors, such as the initial approximation and termination threshold, the same. We perform this experiment using 50 random subsets each of 4, 8 and 10 cores chosen from the original 13.

4. Results

4.1 Experiment 1

Table 3 gives the results of Experiment 1 using the evaluation method described in Section 3.1. Note that in all cases, the maximum displacement is less than a voxel.

Angiogram #    max. displacement vector (cm)    max. displacement distance (cm)
1              (0.03, 0.05, 0.07)               0.08
2              (0.05, 0.05, 0.07)               0.09
3              (0.06, 0.03, 0.02)               0.06

Table 3: Results of Experiment 1 (displacements after registration)

4.2 Experiment 2

Fig. 9 shows the results of this experiment. The x-axis of each scatterplot gives the initial minimum displacement using the accuracy measure of Section 3.1; that is, points along the set of 3D curves are displaced by a distance at least that given by the x-axis. The y-axis gives the final maximum displacement after registration; that is, points along the 3D curves are at most this distance away from their true position. For both Study A and B, the final displacement is within one voxel for minimum initial displacements of up to 7 cm. Given that the longest dimension of each study is at most 20 cm, this implies that the initial starting displacement can be at least 35% of the longest dimension. Even at distances greater than 7 cm, a significant portion of the test cases still converge to solutions with a maximum displacement error of less than one voxel. It can be seen from the scatterplots that where convergence occurs, the residual error is less than one voxel. This suggests that with interactive assistance to reject clearly divergent solutions, it is possible to obtain good results with initial displacements considerably greater than 7 cm.

4.3 Experiment 3

Fig. 10 shows the results of Experiment 3 on Study A. The figure shows a series of scatterplots, the x-axis giving the moment of inertia measure and the y-axis the maximum final displacement after registration. The left and right columns contain plots for 2D and 3D moments of inertia respectively. Increasing the number of basis curves increases our measure of spread, which improves the registration. When the subset is too small (top row), registration is equally likely to succeed as it is to fail. Increasing the subset size gives a more consistent success rate. The results of this experiment suggest that it is possible to get a good registration with a good choice of as few as eight basis curves.

Fig. 9: Results of Experiment 2 for Study A (top row) and B (bottom row), displayed as scatterplots of maximum final displacement (cm) against minimum initial displacement (cm), on logarithmic axes. In both experiments, convergence occurred in every instance for initial displacements of less than 10 cm.

Fig. 10: Results of Experiment 3 on Study A, displayed as scatterplots of maximum final displacement (cm) against the 2D (left column) and 3D (right column) moment of inertia measures, for subsets of 4, 8 and 10 of the 13 cores. Increasing the number of basis curves improves the registration.

Fig. 11 shows the results of Experiment 3 on Study B. As before, increasing the number of basis curves improves registration. For this particular study, there is a significant correlation between moment of inertia and degree of accuracy when the subset is small (top row). This suggests that by choosing basis curves with care, it is possible to do a good registration by using as few as four cores.

Fig. 11: Results of Experiment 3, Study B, in the same format as Fig. 10. When performing registration using four curves, a better spread gives better performance. Increasing the number of curves improves performance.

5. Discussion

The strengths of our method lie in the choice of registration basis. Landmark based registration methods generally operate on a limited number of points. Curves can be considered as 1-dimensional point sets. The number of points available makes them more stable against outliers and random perturbations. At the same time, curves are mathematically simpler objects than surfaces, and algorithms developed for curves are computationally less complex.

As a result, curve-based registration algorithms can be made to run very efficiently and take less time to reach a solution without sacrificing accuracy and stability. Cores share with methods based on surfaces and surface curves the property that they allow the automatic determination of pointwise correspondence based on curvature or proximity. This ability makes registration using all these bases insensitive to the ending or breaking of the curves or surfaces. Cores provide the additional advantage that correspondence can be based on the width or rate of width change of the figures that they represent.

An additional advantage of cores is the ease with which they can be interactively extracted from the image data. A single point and click suffices to select the object figure of interest, and the core is then automatically extracted. Fully automatic core extraction can be based on models incorporating not only figural shape but interfigural relations and boundary/core relations. This is in sharp contrast to manual segmentation and landmark based systems, where user interaction is essential for precise placement. Compared with semi-automated active contour methods, cores require only one user-specified parameter: the approximate location and scale of the starting point. Moreover, the input parameter is independent of image quality. Since this parameter is merely a guess to initialize the algorithm, its accuracy is not important. In contrast, semi-automated segmentation algorithms employing active contours or a similar paradigm require a balance between several user-specified parameters that is image specific. The resultant segmentation is also noticeably affected by initial user settings.

Curve based registration is not new, and its advantages as well as shortcomings have been well documented. However, our choice of using core middles as curves eliminates many of the problems that have traditionally plagued curve based registration. Cores operate at the scale of