3D/2D registration using cores of tubular anatomical structures as a basis
Alan Liu, Elizabeth Bullitt, Stephen M. Pizer


Medical Image Display & Analysis Group, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Abstract: The 3D/2D registration problem is, for a specified object, to compute a projection that best matches a given 2D image of that object. 3D/2D registration is an important step in some multimodality visualization applications and in image-guided surgery, where the dimensionality of images from different sources is not the same. Previous work in this area has employed fiducial marks or surface structures as a registration basis. While these methods are effective, they can require time-consuming preprocessing or user intervention and do not always provide a high level of accuracy. In this paper, we give an alternative method for solving the 3D/2D registration problem that employs non-surface structures as a registration basis. Our method is robust and produces excellent accuracy when tested in a series of experiments.

Support: This research was supported under NIH grant PO CA.

Key words: 3D/2D registration, image fusion, cores, frameless stereotaxy.

1. Introduction

A general strategy for registration can be divided into three parts: modeling the registration transform, choosing the registration basis, and finally selecting an optimization method and a corresponding metric that gives a measure of the goodness of a registration. This section elaborates on each part and provides a brief overview of recent advances. Later in the paper, we show how our registration method compares to other 3D/2D approaches.

At a fundamental level, registration involves the transformation of coordinate systems from one image to another so that corresponding points on the object have the same image coordinates. To register 3D computerized tomography (CT) with 3D magnetic resonance imaging (MRI), then, is to derive a transformation that will map the voxels from one study onto the other so that any given point on the object coincides in both images. It is useful to consider the problem in terms of the change of coordinates implied by each imaging process. Thus,

Definition 1: An imaging process I is a transformation that maps 3-space into n-space, i.e., I: R^3 → R^n, where 2 ≤ n ≤ 3 and R^k is Euclidean k-space.

Thus, a 3D imaging process such as CT is a function I: R^3 → R^3 mapping an object from world coordinates (more precisely, CT scanner coordinates) into voxel coordinates, whereas a plain film x-ray is generated by an imaging process function of the form I: R^3 → R^2. Note that the latter implies projection, which in a typical 2D x-ray image can be modeled by some variant of the perspective transform. In this paper, the value of n should be clear from context; where it is ambiguous, the dimensionality will be given.
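To make definition 1 concrete, the following minimal sketch (ours, not from the paper; the spacing and focal-length values are arbitrary) contrasts a 3D imaging process with a projective 2D one:

```python
import numpy as np

# A CT-like process: world (scanner) coordinates -> voxel coordinates, n = 3.
def I_ct(x, spacing=np.array([0.5, 0.5, 1.0]), origin=np.zeros(3)):
    return (x - origin) / spacing      # R^3 -> R^3, a change of coordinates

# A plain-film x-ray: world coordinates -> image plane, n = 2 (a projection).
def I_xray(x, f=1000.0):
    return f * x[:2] / x[2]            # R^3 -> R^2, perspective with focal length f

p = np.array([10.0, -5.0, 500.0])
print(I_ct(p).shape, I_xray(p).shape)  # (3,) (2,)
```

The dimensionality of the output distinguishes the two cases: the CT-like map is invertible, while the x-ray collapses a whole ray of world points onto one image point.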

Definition 2: Let I_A: R^3 → R^n and I_B: R^3 → R^m be imaging processes. The registration of I_A to I_B is the process of finding a function T: R^n → R^m such that T(I_A(x)) = I_B(x) for all x ∈ R^3; i.e., I_A is registered to I_B when points on objects common to both images coincide. Call T an nD/mD registration.

The following examples help to clarify the above. Consider a CT/MRI volume registration. The patient is scanned under both modalities. In each case, the 3D region of interest is mapped into voxel coordinates. Let I_CT and I_MRI be the respective imaging processes. Thus we may write I_CT: R^3 → R^3 and I_MRI: R^3 → R^3. Finding a registration between the two studies means finding a function T: R^3 → R^3 such that the images are aligned. Since both the domain and range of T are in R^3, this is an example of 3D/3D registration. Now suppose a 3D volume (e.g., preoperative CT) is to be registered against a 2D image (e.g., an intraoperative fluoroscopic image). Let I_3D: R^3 → R^3 and I_2D: R^3 → R^2 be the respective imaging processes. Then to find a 3D/2D registration between them is to determine a function T: R^3 → R^2 satisfying the criteria given in definition 2.

Note that definition 2 allows T to be an arbitrary transformation. In general, some assumptions are made about the nature of T. In 3D/3D registration, for example, T may be restricted to the class of rigid transformations, as in [24]. In other examples the class involves a rigid transformation composed with linear scaling (e.g., [7], [8]). In 3D/2D registration, T might involve the composition of a 3D coordinate transformation and a projection.
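When T is restricted to rigid transformations and point correspondences are already known, the best fit can be computed in closed form. The following sketch is ours, not part of the paper's method; it uses the standard SVD-based least-squares construction for a rigid 3D/3D fit:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform: finds R, t with dst ~ R @ src_i + t.

    src, dst: (n, 3) arrays of corresponding 3D points.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation and translation from noiseless points
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```

The determinant guard keeps the recovered transform orientation preserving, anticipating the same requirement placed on the extrinsic E later in the paper.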

In both 3D/3D and 3D/2D registration, situations exist where distortions acquired during imaging are significant, or the 3D image may be a generalized atlas. It may then be necessary to have T account for them. For example, in [2] the registration function T allows nonlinear local distortions in 3D/3D registration. Alternatively, T might model a video camera or some other physical device. For example, [4] maps a preoperative MRI scan onto a live video signal. The preoperative image is projected so that it is consistent with the TV camera acquiring the intraoperative image.

The basis for registration is the set of image entities used by the algorithm to compute T. These are typically intensities or loci which appear in both images (and thus are known to correspond to the same set of points in R^3). Various kinds of bases have been tried, including generalized image intensities, landmarks, curves and surfaces.

Correlation of intensities or their sign changes (e.g., [24], [22]) is a long-standing approach to registration. This approach has been generalized to intensities derived from the initial image, or even to sets of such derived intensities [23]. A promising improvement of this approach employs the information-theoretic measure of mutual information as the metric for registration ([25], [7], [8]). The method requires only very coarse region-of-interest segmentation and, in theory, uses every pixel as a basis. However, at the time of writing, we are not aware of this method being used for 3D/2D registration.

The remaining methods have bases in the form of image loci: sets of landmarks, curves on surfaces, or the surfaces themselves. Many variations of these have been employed in 3D/2D registration. Landmark-based 3D/2D registration methods involving stereotactic headholders [3],

skin fiducials [8], points located by digitizing wands [2], and anatomic landmarks located in image data have been tried. To produce a registration, a metric such as the Procrustes distance, which returns a minimal value when the points are registered, is used, and some form of optimization method is applied to locate this minimum. These methods lend themselves to rapid registration once the landmarks and their correspondences are defined. Unfortunately, the accuracy of landmark-based registration is critically dependent on establishing a rigid correspondence between fiducials and the region of interest and on precisely locating their positions on the image. Automatic definition of artificial landmarks is normally possible, but the relation of these landmarks to the anatomy is sometimes nonrigid, as when they are placed on the skin or when they are well separated from the anatomic target by soft tissue. Stereotactic headholders eliminate this shifting but can be uncomfortable for the patient and are not always favored by the surgeon. In addition, with methods dependent on artificial landmarks or headholders, retrospective registration is not possible. Choosing anatomic landmarks works well in an interactive environment where the input image has good contrast and high resolution, but the accuracy of selection may be inadequate in noisy or blurred images. Moreover, the interactive selection of an adequate number of landmarks may be time consuming, and the automatic selection of these landmarks, while sometimes possible, is a challenging task of computer vision.

Curve and surface based methods have an advantage over landmarks in that they in effect encompass many points. They thus reduce the need for precise point placement but are not completely free of this requirement. They have the additional advantage of requiring the specification of only a few curve or surface component correspondences.
Curve based methods may require the additional step of establishing point correspondences between curves, but this can

usually be done automatically. Among the various methods used to accomplish this are those based on curvature (e.g., [3]) or proximity (e.g., []). Curves and boundaries have been employed in conjunction with surfaces in 3D/2D registration. For example, [] employs a curve-based method for retrospective registration of angiographic and MRA data. [6] has developed a method of 3D/2D registration which registers the 2D silhouette of an object against its 3D segmented surface. A signed distance function indicates the goodness of fit, with values closer to zero being better. [4] employs pure surface-based registration in his image-guided surgical application by the use of structured light to deduce a 3D surface from 2D camera images. The cost function is based on computing the smallest distance between points on the two surfaces. A local optimization method with multiple initial start points is employed to ensure robust convergence.

Surface and curve based 3D/2D registration can provide a good degree of accuracy, but the extraction of these structures can depend on subjective judgment and may require a fairly time-consuming preprocessing stage. For example, both [6] and [4] require segmentation of the object's surface. Using manual segmentation, delineation of the boundary (and hence registration accuracy) rests on subjective judgment for, at current voxel resolutions, there remains a degree of fuzz around an object's boundary which makes locating edges difficult. Even with automated and semi-automated segmentation methods such as active contours [5], a different choice of parameter settings by the user can cause the segmented portion of the object to change. Moreover, differences in image contrast and noise levels will cause variations in the segmented result and must be manually compensated. In the next section, we present a registration method which addresses many of these problems.

2. Method

We begin our discussion by giving an overview of a specific 3D/2D registration problem detailed in [6]. The key aspects of this problem serve to illustrate the main points in the formulation of our algorithm. In our application, we are given a 3D MR angiogram and a 2D x-ray angiogram of the same patient. We are also given all relevant intrinsic parameters of the 2D image, such as the source-to-film distance and the size of the image plane. We would like to know the location and orientation of the 2D imaging device with respect to voxel coordinates when the 2D image was acquired.

2.1 Modeling the transform

Let I_3D, I_2D be the imaging processes for the 3D MRA and the 2D x-ray respectively. Ignoring MR-induced distortions for now, I_3D is simply an isometry (see definition 4 in section 2.3). From definition 2, we have T I_3D = I_2D. Since we wish to find the attitude of the x-ray device in voxel coordinates, we may discard I_3D and write T = I_2D. From a modeling standpoint, I_2D: R^3 → R^2, where I_2D = PE such that P is a perspective transformation and E is the attitude matrix. Replacing I_2D with PE, we have T: R^3 → R^2 such that T = PE. We may assume P to be given, or it can be determined by various camera calibration techniques (e.g., [20]). Our algorithm will compute E.

2.2 Registration basis

Our algorithm employs cores, a novel basis for registration. Cores are loci of generalized local maxima (ridges) of a medialness function derived from an underlying image. The core has

been developed as an object description technique by Pizer and colleagues. Cores describe a simple figure in terms of the position of its medial axis at scales proportional to its width [2]. Here, the notion of width corresponds to the radius of the largest fuzzy circle that fits within the figure. By a fuzzy circle, we mean a circle with a boundary that has been blurred by a normalized Gaussian whose standard deviation parameter is a function of the circle's radius. Equivalently, we may consider a circle with a hard boundary that best fits a normalized Gaussian blurred instance of the figure, where the degree of blurring is proportional to the radius of the circle. Fig. 1 illustrates.

Fig. 1: A teardrop-shaped object. The dotted line represents the core middle. It is the locus of the centers of a set of circles whose fuzzy boundaries are tangent to the object's boundaries.

Since cores contain both spatial and width components, they lie in a space one dimension higher than the image from which they are extracted. For example, cores extracted from a 2D image are curves in the scale space R^2 × R^+. Note that the scale component is positive. Cores are classified according to their dimensionality and to the dimensionality of the source image. Thus, cores extracted from 2D images are classified as 1D-from-2D cores and written as 1D2. In higher dimensional spaces, cores are m-dimensional manifolds, where m < n and depends on the figure

being extracted. For the particular case of 3 dimensions, objects whose cross section is approximately circular produce 1D3 cores, while objects with elliptical cross sections produce 2D3 cores. We shall call the former tubular objects. A core middle is the orthogonal projection of a core onto its spatial components; it forms a skeleton. The left image of fig. 2 shows the core middle of a 1D2 core extracted from the carotid artery in a 2D image, while the right image shows its width information.

Fig. 2: Anterior-posterior (AP) angiograms. A core has been extracted from the carotid artery (arrow) and its spatial projection displayed on the left image. The right image shows the core's width information.

Cores exhibit remarkable stability to noise, blurring and poor contrast [9]. These are situations in which boundary and landmark based algorithms generally do the worst. In addition, different medialness operators can be employed depending on the type of image encountered. A more detailed discussion is deferred until section 5. User interaction is still required for core extraction but is restricted to merely indicating the vicinity where a core can be found. A local search for an initial core point is performed automatically, and once this has been found, the entire core is automatically extracted. Thus, core extraction is completely deterministic even in cases

where user interaction cannot pinpoint the same location in successive repetitions. The degree of user interaction required is also considerably reduced.

Excluding the case of self-occlusion, tubular objects with constant cross section have core middles which are invariant to orthographic projection. That is, projecting a 3D tubular object to 2D and generating a 1D2 core middle from the projection yields the same result as projecting the 1D3 core middle of the original object. To see this, consider a circle C with radius r and center c. In general, an orthographic projection of C produces an ellipse with major axis of length 2r such that c is projected onto the midpoint of the major axis. Fig. 3 (left) illustrates. Now consider a tube with a circular cross-section of constant radius. We may consider this to be a stack of infinitesimally thin circles such that the tube middle is the set of centers of the circles. An orthographic projection produces a 2D ribbon of width 2r. The middle of the projection is a locus of points such that (ignoring end points) it is at the midpoint of the shortest line connecting the two edges. But this line is precisely the major axis of the projection of a circular cross section (fig. 3, right). Thus, the projection of c falls on this locus and hence the projection of the tube middle falls onto the middle of the tube projection. In practice, this observation remains valid for a perspective projection if the radius of the tube is small compared to its distance from the camera.

In situations where the cross section is not constant, the deviation from this observation is small. The following analysis gives an idea of the magnitude of this error. We make the simplifying assumption that figure boundaries are unblurred and the core middle is the locus of centers of circles whose boundaries are tangent to at least two points on the figure.
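The key fact used above, that an orthographic projection sends the center of a circular cross section to the center of the resulting ellipse, is easy to check numerically. The sketch below is our own illustration with an arbitrarily tilted plane; the orthographic projection is simply dropping the z coordinate:

```python
import numpy as np

# A circular cross-section of radius r, centered at c, in a tilted plane.
r, c = 2.0, np.array([1.0, -0.5, 3.0])
u = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)     # two orthonormal vectors
v = np.array([0.0, 1.0, 0.0])                  # spanning the circle's plane
s = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
circle = c + r * (np.outer(np.cos(s), u) + np.outer(np.sin(s), v))

proj = circle[:, :2]               # orthographic projection: drop z
ellipse_center = proj.mean(axis=0)

# The ellipse's center coincides with the projection of the circle's center,
# so the projected tube middle lies on the middle of the tube's projection.
print(np.allclose(ellipse_center, c[:2]))  # True
```

Because orthographic projection is affine, this holds for any tilt of the cross-sectional plane, which is exactly what the invariance argument for constant-width tubes relies on.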

Fig. 3: Left: orthographic projection of a circle. Right: cross-section of a tube.

A general tubular object is defined by the locus of the boundary of a circle moving along a space curve such that its center is on the curve and the normal to the circle is tangent to the curve. The radius of the circle is a function of its distance along the curve. As for the case of a constant width tubular object, the orthographic projection of a general tube is defined by the orthographic projections of its circular cross sections; these projections still appear as ellipses, and the projection of the tube middle lies on the midpoints of the major axes of these ellipses. However, it is now no longer necessarily true that the ribbon's middle (i.e., the middle of the projection of the tube) is the same as the projected tube middle. The extent to which this differs depends on the local curvature of the boundaries and the rate of widening of the ribbon. An example illustrates. Consider a tube that widens only on one side. Fig. 4 shows a projection of such a tube from the side. In this situation, the cross section of the tube projects as a line. The solid line represents the projection of the tube middle. The core middle of the projection is the locus of centers of circles that touch at least two points on the boundaries of the object. As the figure shows, this locus is different.

To analyze the amount of deviation between the ribbon middle and the projected tube middle, we make the following observations. Locally, the boundaries of a ribbon may be considered to be

Fig. 4: A tubular object with an abrupt change in width. The middle solid line represents the projection of the tube middle. The dashed line represents the core middle extracted from the tube projection.

straight but not necessarily parallel. Let the ribbon boundaries be widening at an angle of θ. Let there be an abrupt change in the width of the boundary within this local neighborhood at points t1 and t2, such that the new direction of the boundary forms an angle of φ with the old. Consider the extreme case when φ = 90 degrees. Fig. 5 (left) illustrates our argument. This may be considered a worst case since, in practice, it is unlikely any tubular anatomical object will exhibit such behavior. The boundaries may be considered as planar curves. Call these curves a and b. The curvature function of a and b is everywhere zero except at the point of widening. That is, a plot of the curvature function for a and b produces a δ or point function. Recall that a core is the locus of centers of circles that best fit a normalized, Gaussian blurred instance of the object, where the degree of blurring is proportional to the circle's radius. Let σ be the degree of blurring; then the blurring effect limits the maximum value of the curvature function possible for a and b. A reasonable approximation of this limit can be obtained by convolving the curvature function with a 1D normalized Gaussian of standard deviation σ. Since the curvature is a δ function, its

convolution with a normalized Gaussian produces an identity transformation of the Gaussian. Thus, the maximum curvature for blurred versions of a and b is 1/(√(2π)σ). Fig. 5 (right) illustrates the extent of error possible using this estimate of maximum curvature. Locally, the ribbon boundary is straight except for a change in direction at t1 (a similar change occurs for the other boundary, which is not shown for clarity). The ellipse represents the projection of a circular cross section after this deviation, and its midpoint p1 is the projection of the tube middle for this cross section. The center of the circle (given by c2) gives the location of the 1D2 core middle. In the worst case, the length of c1c2 is equal to that of p1p2. A straightforward trigonometrical computation gives

    p1p2 = r tanφ tanθ / cosθ = r tanφ sinθ / cos²θ    Equation (1)

Since tanφ ≤ 1/(√(2π)σ), we may write

    c1c2 ≤ r sinθ / (√(2π) σ cos²θ)    Equation (2)

As discussed previously, r and σ are related by a constant of proportionality, i.e., σ = kr. In our implementation, k is normally 0.5, so we have

    c1c2 ≤ 2 sinθ / (√(2π) cos²θ)    Equation (3)

We can reasonably expect that even for the most severe cases, θ ≤ 35 degrees, in which case c1c2 is less than a pixel, about a half-pixel error. The reader is reminded that this value is for an instance where φ = 90 degrees, that is, when the walls of the tube abruptly make a 90 degree turn from their initial direction. In typical cases, tubular anatomical objects such as blood vessels have an approximately constant or slowly growing width (e.g., θ ≤ 10 degrees) and do not widen abruptly (i.e., tanφ « 1), so we have c1c2 « 0.43 of a pixel. That is, the error is considerably less than half a pixel.

Fig. 5: Left: Thick lines represent the projection boundary. The 1D2 core point is located at the center of the circle, denoted by c2. Here, φ = 90 degrees. Right: The effect of blurring is to restrict the value of φ. The maximum deviation of a core point from the tube middle (given by the line segment c1c2) can be computed given θ, φ and r.

Thus, if we are careful to avoid areas of self-occlusion, core middles provide us with a basis for registering tubular objects against their projections. This basis reduces the problem of registration between images to one of registration between sets of curves, if a sufficient number of tubular objects are available.
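Taking the bound of equation (2) with the substitution σ = kr (as reconstructed here, with the radius cancelling), the worst-case core displacement is easy to evaluate numerically. The function below is our own illustration, not code from the paper:

```python
import math

def core_shift_bound(theta_deg, k=0.5):
    """Worst-case displacement of the extracted core middle from the
    projected tube middle for boundary widening angle theta, assuming the
    reconstructed bound sin(theta) / (sqrt(2*pi) * k * cos(theta)^2),
    where sigma = k * r has cancelled the tube radius r."""
    th = math.radians(theta_deg)
    return math.sin(th) / (math.sqrt(2 * math.pi) * k * math.cos(th) ** 2)

print(round(core_shift_bound(35.0), 2))  # 0.68: under a pixel even in a severe case
print(round(core_shift_bound(10.0), 2))  # 0.14: a typical slowly widening vessel
```

The steep growth of the bound near θ = 90 degrees reflects the ribbon edges becoming nearly parallel to the projection direction, where the analysis above no longer applies.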

2.2.1 Core extraction

The algorithms describing a method for extracting 1D2 and 1D3 cores can be found in [10]. In the special case in which tubular objects have good contrast with their surroundings and possess fairly uniform intensities, a method for approximating cores using intensity ridges can be employed [2]. This method is significantly faster than true core computation and produces an acceptable approximation to true cores. Using this method, it takes less than 5 seconds on an HP 75/00 to extract a tubular object approximately 250 voxels in length. Fig. 6 shows the cores extracted from a 3D MR angiogram (MRA) study of the head using this algorithm. The operator took approximately 20 minutes of user time on a DECStation 25 to extract more than 100 vessels. Applying the core extraction process to both the 2D and 3D images and then taking their core middles yields a set of 2D curves and a set of 3D curves respectively. Curves allow a computationally appealing and robust optimization method to be employed. The following section describes how these two sets of curves are registered.

Fig. 6: Left image shows core middles extracted from a 3D MRA study of the head. Right image is a 3D rendering of core middles with width information included. The red wireframe structure in the middle is the ventricle, shown for orientation purposes.
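True core and intensity-ridge extraction is beyond the scope of a short example, but the flavor of following a bright tubular path from a user-supplied seed can be conveyed by a deliberately naive greedy tracker. This is entirely our own toy, not the method of [10] or [2], which operates on medialness with sub-pixel optimization:

```python
import numpy as np

def track_ridge(img, seed, steps=100):
    """Toy greedy path follower: from a seed pixel, repeatedly move to the
    brightest unvisited 8-neighbour. Conveys only the idea of tracking a
    bright tubular structure, not actual core extraction."""
    path, visited = [tuple(seed)], {tuple(seed)}
    y, x = seed
    for _ in range(steps):
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < img.shape[0] and 0 <= x + dx < img.shape[1]
                and (y + dy, x + dx) not in visited]
        if not nbrs:
            break
        y, x = max(nbrs, key=lambda q: img[q])
        visited.add((y, x))
        path.append((y, x))
    return path

# a synthetic bright horizontal "vessel" along row 5 of a dark image
img = np.zeros((11, 30))
img[5, :] = 1.0
print(track_ridge(img, (5, 0), steps=29)[-1])  # (5, 29): the tracker follows the row
```

Real ridge following must also cope with gaps, branching and noise, which is where the stability properties of cores discussed above become important.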

2.3 Curve-based 3D/2D registration algorithm

Since we are dealing with images taken from the same object, a 2D curve will have a corresponding 3D counterpart. Note that the special case of a curve seen end-on results in a point; we may consider this to be a constant curve, i.e., a curve such that β(s) = p for all s, where p is a constant. A 2D/3D curve pair may have endpoints that do not necessarily coincide. It is thus necessary for the registration algorithm to take this into consideration. One method is to determine the common segments on each corresponding 2D/3D curve pair. Given a perfect registration, this task is trivial since we can project the 3D curve and then do a pointwise match with its 2D counterpart. Since we do not have this perfect situation, our algorithm deals with this lack by adopting an iterative two-pass approach using a paradigm similar to that of [4], [26] and [5]. At the beginning of each pass, we have an estimate of the true registration (in the first pass, the user provides an approximate guess). An alignment phase uses this guess to extract a list of common points for each pair of curves (alignments are discussed in greater detail in section 2.4.3). A local gradient descent phase uses these points to refine the current estimate of the registration value. The first phase is then called again to provide a better alignment, and the algorithm iterates until it converges to a satisfactory solution as determined by a suitable metric.

The method of gradient descent we use is similar to Lowe's algorithm [7], but we employ a different method of expressing the registration transformation which is better suited to our purpose and is slightly more complex than Lowe's. Another variant is also mentioned in the appendix of [22], but details of its derivation are not given. The remainder of this section gives a

precise statement of the problem the algorithm is to solve and describes the details of the algorithm.

Definition 3: Let S denote a set of curves in 3-space, i.e., S = { α_j(s) | j = 1, 2, … }, where α_j(s) ∈ R^3 is a C^3 or greater curve.

Definition 4: Let T_p = PE be a perspective projection from R^3 to R^2 such that P: R^3 → R^2 and E is an orientation-preserving isometry, i.e., the orthogonal component of E has a positive determinant. The term extrinsic will also be used to refer to E.

As mentioned in the introduction, T_p is intended to model an x-ray device that produces 2D images. As a first-order approximation of such a device, we write

    P(x, y, z) = (xf/z, yf/z)    Equation (4)

i.e., the x-ray camera performs a perspective transformation with focal length f. Parameters controlling the orientation and location of the x-ray camera are described by E. A more accurate model of the physical system must consider other factors: image skew, aspect ratio and optical center, among others. It is the goal of the algorithm to compute E. For the purpose of this discussion we assume an ideal camera, i.e., P is given by equation 4.
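The model T_p = PE of definition 4 and equation (4) can be written down directly. The sketch below is ours; the helper names are arbitrary, and E is represented as a rotation R and translation t applied as E(x) = R(x − t), matching the form used later in the paper:

```python
import numpy as np

def perspective(p, f):
    """Equation (4): ideal pinhole camera, P(x, y, z) = (x*f/z, y*f/z)."""
    return np.array([p[0] * f / p[2], p[1] * f / p[2]])

def extrinsic(R, t):
    """E(x) = R(x - t): an orientation-preserving isometry (det R = +1)."""
    assert np.isclose(np.linalg.det(R), 1.0)
    return lambda x: R @ (x - t)

# T_p = P o E applied to a sample world point, with an identity attitude
E = extrinsic(np.eye(3), np.zeros(3))
print(perspective(E(np.array([2.0, 4.0, 2.0])), 1.0))  # [1. 2.]
```

Skew, aspect ratio and the optical center would enter as extra parameters of `perspective`; the paper's algorithm holds P fixed and estimates only the six parameters of E.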

Definition 5: Let S(T_p) = { γ_j | γ_j = T_p α_j, α_j ∈ S } be the set of 2D planar projections under T_p of the curves in S. Each γ_j is assumed to be C^3 or greater.

Definition 6: Let A denote a set of planar curves, i.e., A = { β_k(s) | k = 1, 2, … }, where β_k(s) ∈ R^2 is a C^3 or greater curve that is noisy and a possibly incomplete projection of a curve in S.

Given the above, the problem may be stated as follows: given S, P and A, compute E (and hence T_p) such that the set A matches S(T_p) as closely as possible under the metric given in definition 9, section 2.4.3.

2.4.1 Establishing curve correspondence

Definition 7: A correspondence function k: A → ℵ+ = {1, 2, 3, …} associates a curve β ∈ A with the index number of a curve in S and S(T_p).

We use the correspondence function to relate curves in A with their 3D counterparts in S. An example will make this clearer. Recall from definition 3 that S is an indexed set, i.e., each element α in S has a unique index value. Suppose the counterpart of β_i in A is α_j in S. Then k(β_i) = j, and we say that (β_i, α_k(β_i)) form a correspondence pair. k can be used to express correspondence between A and S(T_p) as well. From definition 5, we note that curves in S(T_p) have an identity correspondence with S, i.e.

(α_l, γ_m) ∈ S × S(T_p) is a correspondence pair iff l = m, l ∈ {1, 2, …, |S|}, m ∈ {1, 2, …, |S(T_p)|}. Thus, we may write (β_i, γ_k(β_i)) iff (β_i, α_k(β_i)). Where it is unambiguous, we relax the notation somewhat and write k(β_i) = α_j or k(β_i) = γ_j. In our present implementation, k is defined interactively. An automated means of establishing curve correspondence based on spatial proximity is also possible and will be added in a future implementation.

2.4.2 Algorithm description

The algorithm takes as input (A, P, S, E_0, k), where E_0 is an estimate of E, k is a curve correspondence function and the other variables are as defined previously. Given the initial estimate E_0, the algorithm iteratively refines it. Each iteration goes through two phases, computing the curve alignment and refining the estimate of E. The algorithm terminates when a mean square distance metric comparing A with S(T_p) falls below a threshold.

1. set E ← E_0
2. while threshold not reached
3.   T_p ← PE
4.   compute S(T_p)
5.   for each pair (β_j, γ_k(β_j))
6.     establish curve alignment function f_j between β_j and γ_k(β_j)
7.     establish curve alignment between β_j and α_k(β_j) by transitivity with γ_k(β_j)
8.   end for
9.   update E using (A, S, P, E, k, {f_j}) as input
10. end while
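The listing above translates almost line for line into code. The skeleton below is our own sketch: `align`, `refine` and `metric` are stand-ins for the phases described in the following sections, and any callables with these shapes will do:

```python
import numpy as np

def register(A, S, P, E0, k, align, refine, metric, threshold=1e-3, max_iter=50):
    """Skeleton of the two-pass iteration in the listing above. A maps curve
    ids to lists of 2D points, S maps curve ids to lists of 3D points, and
    k is the correspondence function represented as a dict."""
    E = E0
    for _ in range(max_iter):
        T_p = lambda w, E=E: P(E(w))                        # line 3: T_p <- P o E
        S_proj = {j: [T_p(w) for w in S[j]] for j in S}     # line 4: compute S(T_p)
        # lines 5-7: alignment f_j between beta_j and gamma_{k(beta_j)};
        # by transitivity the same alignment relates beta_j to alpha_{k(beta_j)}
        f = {j: align(beta, S_proj[k[j]]) for j, beta in A.items()}
        if metric(A, S_proj, k) < threshold:                # line 2: threshold test
            break
        E = refine(A, S, P, E, k, f)                        # line 9: update E
    return E

# with trivial stand-ins the loop terminates immediately, which makes the
# control flow easy to exercise in isolation
identity = lambda w: w
E = register({1: [np.array([0.0, 0.0])]},
             {1: [np.array([0.0, 0.0, 1.0])]},
             lambda p: p[:2], identity, {1: 1},
             align=lambda b, g: None,
             refine=lambda *a: None,
             metric=lambda *a: 0.0)
print(E is identity)  # True
```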

Lines 5 to 7 of the algorithm require that each curve in A be placed in correspondence with curves in S(T_p) and S; the function k defines this pairing.

2.4.3 Establishing curve alignment

Given a pairing (β_i, γ_k(β_i)), an alignment function f: D → R is one that has the property β_i(s) = γ_k(β_i)(f(s)). Note that f does not satisfy the true mathematical definition of a function in that it may not be defined for all points in D. In our implementation, we choose a proximity-based approach. This method depends on spatial proximity between curve projections. It has been found to be robust under conditions of noise and in pairings where the curves are of different lengths. Let (β, γ) be a curve pairing, where both β and γ are planar curves. Let their Frenet apparatus be (T_β, N_β, B_β, κ_β) and (T_γ, N_γ, B_γ, κ_γ) respectively (note that the torsion τ is 0 for planar curves).

Definition 8: Let (β, γ) ∈ A × S(T_p) be a correspondence pair and let p ∈ β. The perpendicular point ρ_γ(p) on γ is the closest point to p when measured along the line passing through p and parallel to N_β(p). Fig. 7 illustrates.

Proximity alignment begins with the assumption that β and γ are spatially close to each other. Using proximity alignment, f can thus be defined as f: D → R such that γ(f(s)) = ρ_γ(β(s)). In practice, f is computed for discrete values by taking a set of

Fig. 7: The perpendicular point ρ_γ(p).

points {p_1, p_2, …, p_n} on β, evaluating ρ_γ for each p_1, p_2, …, p_n and then computing the corresponding parameterization of γ for ρ_γ(p_1), ρ_γ(p_2), …, ρ_γ(p_n). This simple function works well even when the curves are not close to each other if simple consistency checks are imposed, such as not accepting any point pairing if the distance between p and ρ_γ(p) is greater than two standard deviations from the average distance between all points q ∈ β and ρ_γ(q).

The perpendicular point can also be used to provide a measure of how accurate a registration is.

Definition 9: Define the distance measure M(A, S(T_p)) to be

    M(A, S(T_p)) = Σ_{β ∈ A} (1/l(β)) ∫ || β(s) − ρ_k(β)(β(s)) ||² ds

where k is the correspondence function (here we use the informal assumption that k maps A into S(T_p)) and l(β) is the arc length function for β. In words, M is the mean square value of the perpendicular distance between points on β and its paired curve γ, summed over all β ∈ A.
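A discrete version of the perpendicular point of definition 8, and of the metric M built from it, might look as follows. This is our own sketch: curves are represented as arrays of sample points, and a polyline intersection stands in for the continuous definition:

```python
import numpy as np

def perpendicular_point(p, n, gamma):
    """rho_gamma(p): where the line through p along the normal n of beta
    meets the polyline gamma; returns the crossing closest to p, or None
    if the line misses every segment."""
    best, best_dist = None, np.inf
    for a, b in zip(gamma[:-1], gamma[1:]):
        M = np.column_stack([n, a - b])        # solve p + s*n = a + u*(b - a)
        if abs(np.linalg.det(M)) < 1e-12:
            continue                           # segment parallel to the normal line
        s, u = np.linalg.solve(M, a - p)
        if 0.0 <= u <= 1.0 and abs(s) < best_dist:
            best, best_dist = a + u * (b - a), abs(s)
    return best

def mean_square_distance(betas, normals, gammas):
    """Discrete metric M: for each curve beta, average the squared
    perpendicular distances of its sample points, then sum over curves."""
    total = 0.0
    for beta, n_beta, gamma in zip(betas, normals, gammas):
        d2 = [np.sum((p - perpendicular_point(p, n, gamma)) ** 2)
              for p, n in zip(beta, n_beta)]
        total += sum(d2) / len(d2)
    return total

# beta: two samples on y = 0 with upward normals; gamma: the line y = 1
beta = np.array([[0.0, 0.0], [1.0, 0.0]])
normals = np.array([[0.0, 1.0], [0.0, 1.0]])
gamma = np.array([[-1.0, 1.0], [2.0, 1.0]])
print(mean_square_distance([beta], [normals], [gamma]))  # 1.0
```

A production version would also apply the two-standard-deviation outlier check described above before accepting each point pairing.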

In practice, the distance measure is computed at discrete points along β, i.e., let B_β = { β(s_1), β(s_2), …, β(s_n) } be a set of n uniformly distributed points on the curve β. Then

    M(A, S(T_p)) = Σ_{β ∈ A} (1/|B_β|) Σ_{p ∈ B_β} || p − ρ_k(β)(p) ||²

where |B_β| is the cardinality of B_β.

2.4.4 Refining E

As in the case of the correspondence function, since S(T_p) is a projection of S, the alignment between α ∈ S and γ ∈ S(T_p) is given by the identity function. Thus, given an alignment function for each curve pairing (β, γ) ∈ A × S(T_p), the alignment function for (β, α) ∈ A × S, where γ = T_p α, is also known. At this point, a relation between projected points w_Tp and their 3D counterparts w has been established. This can be used to compute the extrinsic E. The method developed is based on an improvement to Lowe's algorithm [7]. An improvement very similar to the method described below is suggested in the appendix of [9], but no details as regards its derivation are given.

Let w_Tp = T_p w, where w ∈ R^3, T_p = PE and E is an orientation-preserving isometry. E may also be viewed as a function E: R^3 → R^3 such that E(x) = R(x − t), where R: R^3 → R^3 is an orthogonal transformation that is orientation preserving (and hence excludes reflections) and t ∈ R^3 is a translation vector. Since R may be expressed as the composition of

rotations φ_x, φ_y, φ_z about the x, y and z axes respectively, we can write E(φ_x, φ_y, φ_z, t_x, t_y, t_z). Since P is given, we have T_p(φ_x, φ_y, φ_z, t_x, t_y, t_z). Writing T_P as a first order Taylor series about T_{P0} = P E_0 yields

    T_P = T_{P0} + dT_P(φ_x, φ_y, φ_z, t_x, t_y, t_z)    Equation (5)

where dT_P is the Jacobian of partial derivatives of T_P(φ_x, φ_y, φ_z, t_x, t_y, t_z) at 0. Now, suppose we wish to determine E and are given w, w^{T_p}, P and E_0 such that ||w^{T_p} − P E_0 w|| ≤ ε. Then, since E_0 is close to E,

    w^{T_p} = T_P w = (T_{P0} + dT_P(φ_x, φ_y, φ_z, t_x, t_y, t_z)) w

Thus,

    w^{T_p} − T_{P0} w = dT_P(φ_x, φ_y, φ_z, t_x, t_y, t_z) w    Equation (6)

Since a correct registration will put T_P w directly over w^{T_p}, this reduces to

    w^{T_p} − T_{P0} w − dT_P(φ_x, φ_y, φ_z, t_x, t_y, t_z) w = 0    Equation (7)

This is essentially one step of Newton's method with multiple variables. As with Newton's method, iteration is required to reach satisfactory convergence, and convergence can be expected in situations where w^{T_p} − P E_0 w is within a small neighborhood of 0. Two remarks can be made about the discussion up to this point.

First, the correction vector (φ_x, φ_y, φ_z, t_x, t_y, t_z) can be expressed as an extrinsic, say E_1. Second, if the correction vector is small, then T_P ≈ P E_1 E_0. Let E'_1 = E_1 E_0; then T_{P1} = P E'_1. Thus, successive iterations produce T_{P2}, T_{P3}, …, T_{Pn} and E'_2, E'_3, …, E'_n, where T_{Pn} = P E'_n and E'_{i+1} = E_{i+1} E'_i. When to halt the process may be determined by using some termination criterion (e.g., when ||w^{T_p} − T_{Pn} w|| ≤ ε, or by setting an upper bound on the number of iterations to compute). On termination, E'_n gives the best approximation to E.

For reasons of stability and minimization of perturbations due to noise and other factors, a set of points {w_1, w_2, …, w_m} is used in conjunction with a mean square method to solve Equation (7).

Computing dT_P

Recall that T_p(x) = P E x, where x ∈ R³ and P: R³ → R² is a projection such that P(x, y, z) = (fx/z, fy/z) for some constant f. Writing χ = E x yields

    T_p = (u, v),   u = f χ_x / χ_z,   v = f χ_y / χ_z    Equation (8)

The partial derivatives of T_p can be expressed as

    d(T_p)_u/dμ = f ( (1/χ_z) dχ_x/dμ − (χ_x/χ_z²) dχ_z/dμ )    Equation (9)

    d(T_p)_v/dμ = f ( (1/χ_z) dχ_y/dμ − (χ_y/χ_z²) dχ_z/dμ )    Equation (10)

for μ ∈ {φ_x, φ_y, φ_z, t_x, t_y, t_z}. Now χ = E x = R(x − t), where x ∈ R³ and R is the composition of three rotations such that

    R = [ cos φ_y cos φ_z                              cos φ_y sin φ_z                              −sin φ_y
          sin φ_x sin φ_y cos φ_z − cos φ_x sin φ_z    sin φ_x sin φ_y sin φ_z + cos φ_x cos φ_z    sin φ_x cos φ_y
          cos φ_x sin φ_y cos φ_z + sin φ_x sin φ_z    cos φ_x sin φ_y sin φ_z − sin φ_x cos φ_z    cos φ_x cos φ_y ]    Equation (11)

Taking partial derivatives of χ, we have

    dχ_x/dφ_x = 0    Equation (12)

    dχ_x/dφ_y = −sin φ_y cos φ_z (x_x − t_x) − sin φ_y sin φ_z (x_y − t_y) − cos φ_y (x_z − t_z)    Equation (13)

    dχ_x/dφ_z = −cos φ_y sin φ_z (x_x − t_x) + cos φ_y cos φ_z (x_y − t_y)    Equation (14)

    dχ_x/dt_x = −cos φ_y cos φ_z    Equation (15)

    dχ_x/dt_y = −cos φ_y sin φ_z    Equation (16)

    dχ_x/dt_z = sin φ_y    Equation (17)

    dχ_y/dφ_x = (cos φ_x sin φ_y cos φ_z + sin φ_x sin φ_z)(x_x − t_x) + (cos φ_x sin φ_y sin φ_z − sin φ_x cos φ_z)(x_y − t_y) + cos φ_x cos φ_y (x_z − t_z)    Equation (18)

    dχ_y/dφ_y = sin φ_x cos φ_y cos φ_z (x_x − t_x)

                + sin φ_x cos φ_y sin φ_z (x_y − t_y) − sin φ_x sin φ_y (x_z − t_z)    Equation (19)

    dχ_y/dφ_z = −(sin φ_x sin φ_y sin φ_z + cos φ_x cos φ_z)(x_x − t_x) + (sin φ_x sin φ_y cos φ_z − cos φ_x sin φ_z)(x_y − t_y)    Equation (20)

    dχ_y/dt_x = −sin φ_x sin φ_y cos φ_z + cos φ_x sin φ_z    Equation (21)

    dχ_y/dt_y = −sin φ_x sin φ_y sin φ_z − cos φ_x cos φ_z    Equation (22)

    dχ_y/dt_z = −sin φ_x cos φ_y    Equation (23)

    dχ_z/dφ_x = (cos φ_x sin φ_z − sin φ_x sin φ_y cos φ_z)(x_x − t_x) − (sin φ_x sin φ_y sin φ_z + cos φ_x cos φ_z)(x_y − t_y) − sin φ_x cos φ_y (x_z − t_z)    Equation (24)

    dχ_z/dφ_y = cos φ_x cos φ_y cos φ_z (x_x − t_x) + cos φ_x cos φ_y sin φ_z (x_y − t_y) − cos φ_x sin φ_y (x_z − t_z)    Equation (25)

    dχ_z/dφ_z = (sin φ_x cos φ_z − cos φ_x sin φ_y sin φ_z)(x_x − t_x) + (cos φ_x sin φ_y cos φ_z + sin φ_x sin φ_z)(x_y − t_y)    Equation (26)

    dχ_z/dt_x = −cos φ_x sin φ_y cos φ_z − sin φ_x sin φ_z    Equation (27)

    dχ_z/dt_y = −cos φ_x sin φ_y sin φ_z + sin φ_x cos φ_z    Equation (28)

    dχ_z/dt_z = −cos φ_x cos φ_y    Equation (29)

E_0 can be treated as the initial coordinate frame, allowing us to evaluate the partial derivatives at 0 to produce the following tables.

          dφ_x   dφ_y   dφ_z   dt_x   dt_y   dt_z
    χ_x    0     −z      y     −1      0      0
    χ_y    z      0     −x      0     −1      0
    χ_z   −y      x      0      0      0     −1

    Table 1: Partial derivatives of χ evaluated at 0

           u                   v
    t_x   −f/z                 0
    t_y    0                  −f/z
    t_z    f x/z²              f y/z²
    φ_x    f x y/z²            f + f y²/z²
    φ_y   −(f + f x²/z²)      −f x y/z²
    φ_z    f y/z              −f x/z

    Table 2: Partial derivatives of u and v evaluated at 0
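To illustrate how Equation (7) and the Table 2 derivatives combine into an iterative solver, the sketch below performs one Newton pass per loop iteration and composes the corrections as E'_{i+1} = E_{i+1} E'_i. This is an illustration under our own conventions, not the authors' code: the extrinsic is represented as a 4x4 homogeneous matrix, the small correction rotation is realized exactly with Rodrigues' formula (the sign matches the Table 1 linearization), and all function names are ours.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix exp([w]_x) for a rotation vector w (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    K = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    if th < 1e-12:
        return np.eye(3) + K
    return np.eye(3) + np.sin(th) / th * K + (1 - np.cos(th)) / th**2 * (K @ K)

def jacobian_at_0(chi, f):
    """Per-point 2x6 Jacobian of (u, v) from Table 2, columns ordered
    (t_x, t_y, t_z, phi_x, phi_y, phi_z); (x, y, z) are the point's
    coordinates in the current (E_0) frame."""
    x, y, z = chi
    return np.array([
        [-f / z, 0, f * x / z**2, f * x * y / z**2, -(f + f * x**2 / z**2), f * y / z],
        [0, -f / z, f * y / z**2, f + f * y**2 / z**2, -f * x * y / z**2, -f * x / z],
    ])

def refine_extrinsic(M0, W, target, f=1.0, iters=20):
    """Refine a 4x4 camera-from-world extrinsic M0 so the perspective
    projections of the 3D points W (m x 3) match the observed 2D points
    `target` (m x 2).  Each pass is one Newton step (Equation 7), solved
    in the least-squares sense over all points, followed by composition
    of the correction with the current estimate."""
    M = M0.copy()
    for _ in range(iters):
        chi = (M[:3, :3] @ W.T).T + M[:3, 3]      # points in the current frame
        proj = f * chi[:, :2] / chi[:, 2:3]
        r = (target - proj).ravel()               # residual w^{T_p} - T_{P0} w
        J = np.vstack([jacobian_at_0(c, f) for c in chi])
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        dt, dphi = delta[:3], delta[3:]
        Rinc = rodrigues(-dphi)                   # chi' = Rinc (chi - dt)
        D = np.eye(4)
        D[:3, :3] = Rinc
        D[:3, 3] = -Rinc @ dt
        M = D @ M                                 # E'_{i+1} = E_{i+1} E'_i
    return M
```

In practice a tolerance test on the residual, or a bound on the number of iterations, would terminate the loop early, as described above.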

3. Experiments

A series of experiments were conducted to evaluate the accuracy and robustness of our algorithm. Two 3D studies were obtained. Study A is a 3D MRI scan of the head. Study B is a 3D MRI scan of the head from a different patient. For both Study A and Study B, a set of 3D vessels was extracted using width-augmented intensity ridges; 3D vessels were extracted from Study A and 109 3D vessels from Study B. A perspective projection is then applied to the segmented vessels to produce simulated angiograms. Curves were then extracted from the simulated angiograms and used together with the 3D curves as input to the algorithm.

In the experiments, the studies were projected to 2D using a known extrinsic (call this E_k) to produce a 2D image. This method allows us to compare the computed value of the extrinsic (call this E_c) with the true value E_k during accuracy trials. By varying parameters such as the number of curves used and the extent to which the initial approximation differed from E_k, we are able to evaluate the performance of our algorithm.

3.1 Evaluation of registration accuracy

Let P = {r | r ∈ α, α ∈ S} be a set of evenly spaced points along each 3D curve. Let P_k = E_k P and P_c = E_c P; i.e., the set of points P is transformed by the actual and computed extrinsics respectively. We require points in P_k and P_c to be paired; i.e., if p_i ∈ P_k and q_i ∈ P_c, then p_i = E_k r_i and q_i = E_c r_i, where r_i ∈ P. Let d_i = ||p_i − q_i|| be the Euclidean distance of each pair of points. If E_k = E_c, then d_i = 0 for all i ∈ {1, 2, 3, …, |P|}. If E_k ≠ E_c, at least some of the

distance values will be nonzero. By evaluating the maximum, minimum, average and standard deviation of these distances, it will be possible to quantify the extent by which E_k and E_c differ.

3.2 Evaluation of spatial distribution

We would like to know how the behavior of our algorithm varies for different choices of basis curves. We would expect curves that are well separated spatially to work better than curves that are closely clustered. At the same time, we would expect a better registration with a large number of curves than with a small number. In order to quantify our notion of spread or spatial distribution, we choose to compute the determinant of the second order central moments of regularly spaced points along both 3D and 2D curves as our measure. That is, let the set P be as defined in section 3.1. Define Q = {w | w ∈ β, β ∈ A} to be a set of evenly spaced points along each 2D curve. Then the 2D measure is given by the determinant of the 2×2 matrix

    [ μ_20  μ_11 ]
    [ μ_11  μ_02 ]

where μ_ij = Σ_{w ∈ Q} (w_x − w̄_x)^i (w_y − w̄_y)^j with i + j = 2, and w̄ = (1/|Q|) Σ_{w ∈ Q} w, while the 3D measure is given by the determinant of

    [ μ_200  μ_110  μ_101 ]
    [ μ_110  μ_020  μ_011 ]
    [ μ_101  μ_011  μ_002 ]

such that μ_ijk = Σ_{r ∈ P} (r_x − r̄_x)^i (r_y − r̄_y)^j (r_z − r̄_z)^k with i + j + k = 2, and r̄ = (1/|P|) Σ_{r ∈ P} r.
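The two evaluation measures of sections 3.1 and 3.2 can be sketched as follows (a hypothetical Python rendering for illustration; the function names are ours):

```python
import numpy as np

def displacement_stats(Pk, Pc):
    """Accuracy measure of section 3.1: distances d_i between points
    transformed by the actual (E_k) and computed (E_c) extrinsics,
    summarized as (max, min, mean, standard deviation)."""
    d = np.linalg.norm(np.asarray(Pk, float) - np.asarray(Pc, float), axis=1)
    return d.max(), d.min(), d.mean(), d.std()

def spread_measure(points):
    """Spatial-distribution measure of section 3.2: the determinant of
    the matrix of second-order central moments of a point set, which
    works unchanged for 2D (mu_ij) and 3D (mu_ijk) point sets."""
    X = np.asarray(points, float)
    X = X - X.mean(axis=0)        # central moments: subtract the centroid
    return np.linalg.det(X.T @ X)  # (X^T X)[a, b] = sum of centered products
```

Note that the spread measure vanishes for degenerate configurations (e.g., collinear points), matching the intuition that a well-spread basis is preferable.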

3.3 Experiment 1: 3D/2D registration of simulated angiographic data

This experiment is a general test of the algorithm in an interactive environment. Three such angiograms were generated, one from Study A and two from Study B, each with different extrinsics. The particular views chosen had a large degree of overlap between basis objects in projection. 13 2D cores were extracted from each angiogram for registration. To ensure fairness, the individual producing the angiograms was different from the individual performing the registration. The value of E_k was not communicated between them until after the experiment. Fig. 8 shows the angiograms.

Fig. 8: Simulated angiogram images used for registration experiments.

3.4 Experiment 2: Test sensitivity to choice of initial approximation

A good registration algorithm should be relatively insensitive to the choice of initial approximation of the extrinsic. This experiment evaluates the algorithm in this respect. For each angiogram in experiment 1, an initial approximation to the extrinsic was generated by taking the actual solution and applying a set of random translations and rotations within a given range. Various ranges were tried, and for each range, 50 trials were performed.
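The random initial approximations can be generated along the following lines. The paper does not state how the random rotations and translations were drawn, so the uniform axis-angle and per-component scheme below is our assumption, for illustration only:

```python
import numpy as np

def perturb_extrinsic(Ek, max_angle, max_shift, rng):
    """Generate an initial approximation E_0 for experiment 2 by composing
    the true extrinsic E_k (a 4x4 homogeneous matrix) with a random
    rotation (up to max_angle radians about a random axis) and a random
    translation (each component up to max_shift).  The sampling scheme is
    an assumption; the paper only specifies "within a given range"."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    th = rng.uniform(-max_angle, max_angle)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    # Rodrigues' formula for a rotation of th about the unit axis
    R = np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)
    D = np.eye(4)
    D[:3, :3] = R
    D[:3, 3] = rng.uniform(-max_shift, max_shift, size=3)
    return D @ Ek
```

Each range of (max_angle, max_shift) would then be sampled repeatedly, 50 trials per range, as described above.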

3.5 Experiment 3: Test of sensitivity to choice of basis

We would expect that increasing the number and spread of basis curves will improve registration accuracy. Depending on the particular image, it is also likely that a carefully chosen basis with fewer curves in total may perform as well as or better than a poor choice with more curves. To test our supposition, we conduct a series of runs on both Study A and Study B with the initial set of 13 2D cores obtained in experiment 1. We choose random subsets of these 13 cores and perform a registration on them, keeping all other factors such as the initial approximation and termination threshold the same. We perform this experiment using 50 random subsets each of 4, 8 and 10 cores chosen from the original 13.

4. Results

4.1 Experiment 1

Table 3 gives the results of experiment 1 using the evaluation method described in section 3.1. Note that in all cases, the maximum displacement is less than a voxel.

    Angiogram #   max. displacement vector (cm)   max. displacement distance (cm)
    1             (0.03, 0.05, 0.07)
    2             (0.05, 0.05, 0.07)
    3             (0.06, 0.03, 0.02)              0.06

    Table 3: Results of experiment 1

4.2 Experiment 2

Fig. 9 shows the results of this experiment. The x-axis of each scatterplot gives the initial minimum displacement using the accuracy measure of section 3.1. That is, points along the set of 3D curves are displaced by a distance at least that given by the x-axis. The y-axis gives the final maximum displacement after registration. That is, points along the 3D curves are at most this distance away from their true position. For both Study A and B, final displacement is within one voxel for minimum initial displacements of up to 7 cm. Given that the longest dimension of each study is at most 20 cm, this implies that the initial starting displacement can be at least 35% of the longest dimension. Even at distances greater than 7 cm, a significant portion of the test cases still converge to solutions with a maximum displacement error of less than one voxel. It can be seen from the scatterplots that where convergence occurs, the residual error is less than one voxel. This suggests that with interactive assistance to reject clearly divergent solutions, it is possible to obtain good results with initial displacements considerably greater than 7 cm.

4.3 Experiment 3

Fig. 10 shows the results of experiment 3 on Study A. The figure shows a series of scatterplots, the x-axis giving the moment of inertia measure and the y-axis giving the maximum final displacement after registration. The left and right columns contain plots for 2D and 3D moments of inertia respectively. Increasing the number of basis curves increases our measure of spread, which improves the registration. When the subset is too small (top row), registration is equally likely to succeed as it is to fail. Increasing the subset gives a more consistent success rate. The results of this experiment suggest that it is possible to get a good registration with a good choice of as few as eight basis curves.

Fig. 9: Results of experiment 2 for Study A (top row) and B (bottom row) displayed as scatterplots of maximum final displacement (cm) against minimum initial displacement (cm). In both experiments, convergence occurred in every instance for initial displacements of less than 10 cm.

Fig. 10: Results of experiment 3 on Study A, displayed as scatterplots of maximum final displacement (cm) against the 2D (left column) and 3D (right column) moment of inertia measures, for random subsets of 4 (top row), 8 (middle row) and 10 (bottom row) of the 13 cores. Increasing the number of basis curves improves the registration.

Fig. 11 shows the results of experiment 3 on Study B. As before, increasing the number of basis curves improves registration. For this particular study, there is a significant correlation between moment of inertia and degree of accuracy when the subset is small (top row). This suggests that by choosing basis curves with care, it is possible to do a good registration by using as few as four cores.

Fig. 11: Results of experiment 3, Study B, in the same format as Fig. 10. When performing registration using four curves, a better spread gives better performance. Increasing the number of curves improves performance.

5. Discussion

The strengths of our method lie in the choice of registration basis. Landmark based registration methods generally operate on a limited number of points. Curves can be considered as 1-dimensional point sets. The number of points available makes them more stable against outliers and random perturbations. At the same time, curves are mathematically simpler objects than surfaces

and algorithms developed for curves are computationally less complex. As a result, curve-based registration algorithms can be made to run very efficiently and take less time to reach a solution without sacrificing accuracy and stability.

Cores share with methods based on surfaces and surface curves the ability to automatically determine pointwise correspondence based on curvature or proximity. This ability makes registration using all these bases insensitive to the ending or breaking of the curves or surfaces. Cores provide the additional advantage that correspondence can be based on the width or rate of width change of the figures that they represent.

An additional advantage of cores is the ease with which they can be interactively extracted from the image data. A single point and click suffices to select the object figure of interest, and the core is then automatically extracted. Fully automatic core extraction can be based on models incorporating not only figural shape but interfigural relations and boundary/core relations. This is in sharp contrast to manual segmentation and landmark based systems, where user interaction is essential for precise placement. Compared with semi-automated active contour methods, cores require only one user-specified parameter: the approximate location and scale of the starting point. Moreover, this input parameter is independent of image quality. Since the parameter is merely a guess to initialize the algorithm, its accuracy is not important. In contrast, semi-automated segmentation algorithms employing active contours or a similar paradigm require a balance between several user-specified parameters that is image specific. The resultant segmentation is also noticeably affected by initial user settings.

Curve based registration is not new, and its advantages as well as shortcomings have been well documented.
However, our choice of using core middles as curves eliminates many of the problems that have traditionally plagued curve based registration. Cores operate at the scale of


More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

Computer Vision I Name : CSE 252A, Fall 2012 Student ID : David Kriegman Assignment #1. (Due date: 10/23/2012) x P. = z

Computer Vision I Name : CSE 252A, Fall 2012 Student ID : David Kriegman   Assignment #1. (Due date: 10/23/2012) x P. = z Computer Vision I Name : CSE 252A, Fall 202 Student ID : David Kriegman E-Mail : Assignment (Due date: 0/23/202). Perspective Projection [2pts] Consider a perspective projection where a point = z y x P

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

EE 584 MACHINE VISION

EE 584 MACHINE VISION EE 584 MACHINE VISION Binary Images Analysis Geometrical & Topological Properties Connectedness Binary Algorithms Morphology Binary Images Binary (two-valued; black/white) images gives better efficiency

More information

convolution shift invariant linear system Fourier Transform Aliasing and sampling scale representation edge detection corner detection

convolution shift invariant linear system Fourier Transform Aliasing and sampling scale representation edge detection corner detection COS 429: COMPUTER VISON Linear Filters and Edge Detection convolution shift invariant linear system Fourier Transform Aliasing and sampling scale representation edge detection corner detection Reading:

More information

Comparison of Vessel Segmentations Using STAPLE

Comparison of Vessel Segmentations Using STAPLE Comparison of Vessel Segmentations Using STAPLE Julien Jomier, Vincent LeDigarcher, and Stephen R. Aylward Computer-Aided Diagnosis and Display Lab, The University of North Carolina at Chapel Hill, Department

More information

Obtaining Feature Correspondences

Obtaining Feature Correspondences Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant

More information

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important. Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is

More information

Topic 6 Representation and Description

Topic 6 Representation and Description Topic 6 Representation and Description Background Segmentation divides the image into regions Each region should be represented and described in a form suitable for further processing/decision-making Representation

More information

Advanced Image Reconstruction Methods for Photoacoustic Tomography

Advanced Image Reconstruction Methods for Photoacoustic Tomography Advanced Image Reconstruction Methods for Photoacoustic Tomography Mark A. Anastasio, Kun Wang, and Robert Schoonover Department of Biomedical Engineering Washington University in St. Louis 1 Outline Photoacoustic/thermoacoustic

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Outline 7/2/201011/6/

Outline 7/2/201011/6/ Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern

More information

Curve Subdivision in SE(2)

Curve Subdivision in SE(2) Curve Subdivision in SE(2) Jan Hakenberg, ETH Zürich 2018-07-26 Figure: A point in the special Euclidean group SE(2) consists of a position in the plane and a heading. The figure shows two rounds of cubic

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts ICDAR, Seattle WA September Scanner Parameter Estimation Using Bilevel Scans of Star Charts Elisa H. Barney Smith Electrical and Computer Engineering Department Boise State University, Boise, Idaho 8375

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

3D Geometry and Camera Calibration

3D Geometry and Camera Calibration 3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often

More information

Rigid and Deformable Vasculature-to-Image Registration : a Hierarchical Approach

Rigid and Deformable Vasculature-to-Image Registration : a Hierarchical Approach Rigid and Deformable Vasculature-to-Image Registration : a Hierarchical Approach Julien Jomier and Stephen R. Aylward Computer-Aided Diagnosis and Display Lab The University of North Carolina at Chapel

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

Fingerprint Classification Using Orientation Field Flow Curves

Fingerprint Classification Using Orientation Field Flow Curves Fingerprint Classification Using Orientation Field Flow Curves Sarat C. Dass Michigan State University sdass@msu.edu Anil K. Jain Michigan State University ain@msu.edu Abstract Manual fingerprint classification

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

Chapter 11 Representation & Description

Chapter 11 Representation & Description Chain Codes Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. The direction of each segment is coded by using a numbering

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Chapter 18. Geometric Operations

Chapter 18. Geometric Operations Chapter 18 Geometric Operations To this point, the image processing operations have computed the gray value (digital count) of the output image pixel based on the gray values of one or more input pixels;

More information

Math D Printing Group Final Report

Math D Printing Group Final Report Math 4020 3D Printing Group Final Report Gayan Abeynanda Brandon Oubre Jeremy Tillay Dustin Wright Wednesday 29 th April, 2015 Introduction 3D printers are a growing technology, but there is currently

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

Lecture 8: Registration

Lecture 8: Registration ME 328: Medical Robotics Winter 2019 Lecture 8: Registration Allison Okamura Stanford University Updates Assignment 4 Sign up for teams/ultrasound by noon today at: https://tinyurl.com/me328-uslab Main

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Blood Vessel Visualization on CT Data

Blood Vessel Visualization on CT Data WDS'12 Proceedings of Contributed Papers, Part I, 88 93, 2012. ISBN 978-80-7378-224-5 MATFYZPRESS Blood Vessel Visualization on CT Data J. Dupej Charles University Prague, Faculty of Mathematics and Physics,

More information

Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection

Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides

More information

Multiple Model Estimation : The EM Algorithm & Applications

Multiple Model Estimation : The EM Algorithm & Applications Multiple Model Estimation : The EM Algorithm & Applications Princeton University COS 429 Lecture Dec. 4, 2008 Harpreet S. Sawhney hsawhney@sarnoff.com Plan IBR / Rendering applications of motion / pose

More information

Region-based Segmentation

Region-based Segmentation Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Announcements. Edges. Last Lecture. Gradients: Numerical Derivatives f(x) Edge Detection, Lines. Intro Computer Vision. CSE 152 Lecture 10

Announcements. Edges. Last Lecture. Gradients: Numerical Derivatives f(x) Edge Detection, Lines. Intro Computer Vision. CSE 152 Lecture 10 Announcements Assignment 2 due Tuesday, May 4. Edge Detection, Lines Midterm: Thursday, May 6. Introduction to Computer Vision CSE 152 Lecture 10 Edges Last Lecture 1. Object boundaries 2. Surface normal

More information

Correspondence. CS 468 Geometry Processing Algorithms. Maks Ovsjanikov

Correspondence. CS 468 Geometry Processing Algorithms. Maks Ovsjanikov Shape Matching & Correspondence CS 468 Geometry Processing Algorithms Maks Ovsjanikov Wednesday, October 27 th 2010 Overall Goal Given two shapes, find correspondences between them. Overall Goal Given

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

Image representation. 1. Introduction

Image representation. 1. Introduction Image representation Introduction Representation schemes Chain codes Polygonal approximations The skeleton of a region Boundary descriptors Some simple descriptors Shape numbers Fourier descriptors Moments

More information

A Non-Linear Image Registration Scheme for Real-Time Liver Ultrasound Tracking using Normalized Gradient Fields

A Non-Linear Image Registration Scheme for Real-Time Liver Ultrasound Tracking using Normalized Gradient Fields A Non-Linear Image Registration Scheme for Real-Time Liver Ultrasound Tracking using Normalized Gradient Fields Lars König, Till Kipshagen and Jan Rühaak Fraunhofer MEVIS Project Group Image Registration,

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

Shape Modeling and Geometry Processing

Shape Modeling and Geometry Processing 252-0538-00L, Spring 2018 Shape Modeling and Geometry Processing Discrete Differential Geometry Differential Geometry Motivation Formalize geometric properties of shapes Roi Poranne # 2 Differential Geometry

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 04 130131 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Histogram Equalization Image Filtering Linear

More information

Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder]

Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder] Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder] Preliminaries Recall: Given a smooth function f:r R, the function

More information

Image Processing

Image Processing Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore

More information

274 Curves on Surfaces, Lecture 5

274 Curves on Surfaces, Lecture 5 274 Curves on Surfaces, Lecture 5 Dylan Thurston Notes by Qiaochu Yuan Fall 2012 5 Ideal polygons Previously we discussed three models of the hyperbolic plane: the Poincaré disk, the upper half-plane,

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information