Markerless Real-Time Target Region Tracking: Application to Frameless Stereotactic Radiosurgery

T. Rohlfing 1,2, J. Denzler 3, D.B. Russakoff 2, Ch. Gräßl 4, and C.R. Maurer, Jr. 2

1 Neuroscience Program, SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025, USA. Email: torsten@synapse.sri.com
2 Department of Neurosurgery, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA. Email: {dbrussak, crmaurer}@stanford.edu
3 Universität Passau, Fakultät für Mathematik und Informatik, Innstraße 33, 94032 Passau, Germany. Email: denzler@cv.fmi.uni-passau.de
4 Lehrstuhl für Mustererkennung, Universität Erlangen, Martensstraße 3, 91058 Erlangen, Germany. Email: graessl@informatik.uni-erlangen.de

VMV 2004, Stanford, USA, November 16-18, 2004

Abstract

Accurate and fast registration of intra-operative 2D projection images to 3D pre-operative images is an important component of many image-guided surgical procedures. If the 2D image acquisition is repeated several times during the procedure, the registration problem can instead be cast as a 3D tracking problem. To solve the 3D problem, we propose in this paper to apply a real-time 2D region tracking algorithm to first recover the components of the transformation that are in-plane to the projections. From the 2D motion estimates of all projections, a consistent estimate of the 3D motion is derived. We compare this method to computation in 3D and to a combination of both. Using clinical data with a gold-standard transformation, we show that a standard tracking algorithm is capable of accurately and robustly tracking regions in x-ray projection images, and that the use of 2D tracking greatly improves the accuracy and speed of 3D tracking.

1 Introduction

The CyberKnife (Accuray, Inc., Sunnyvale, CA), shown in Fig. 1, is a robotic frameless stereotactic radiosurgery system used in cancer therapy [1]. A pair of orthogonal flat-panel amorphous silicon detectors (ASDs) provides pairs of intra-operative x-ray images (Fig. 2) with accurately known projection geometries.

Figure 1: CyberKnife radiosurgery system with (1) ceiling-mounted x-ray source, (2) flat-panel amorphous silicon detector (ASD), and (3) robot-mounted therapy beam source. A second x-ray imaging system with ceiling-mounted source (not visible) and floor-mounted ASD (partly visible) is installed perpendicular to the first system.

Figure 2: X-ray projection images from two orthogonal directions with tracking ROI (rectangle) and uniformly distributed random tracking template points (white dots). The tracking ROI covers the cervical vertebra of interest, plus its adjacent vertebrae on either side. A magnified image of the tracking ROI from camera A is shown in Fig. 3.

These images are registered to a pre-operative three-dimensional (3D) computed tomography (CT) image, in which the treatment target (typically a tumor) and the therapy beams have been defined. The registration determines the current patient pose so that the therapy beams are accurately aligned with their planned position and orientation with respect to the target. Acquisition of the x-ray images is repeated periodically to follow the patient's motion over time and adapt the targeting accordingly. Each new pair of x-ray images therefore requires a new registration to the CT image, which is time consuming and, in the presence of large motion, not very robust.

Much of the 3D motion can be deduced from the motion of objects seen in multiple two-dimensional (2D) projection images, e.g., in our application a pair of orthogonal projections. This principle has been applied in numerous works (see Ref. [2] for a survey). For radiosurgery treatment of the spine, the only method used in clinical practice requires bone-implanted markers that can be easily identified and efficiently tracked in the projection images. The implantation of markers requires a separate intervention and, although minimally invasive, causes surgical trauma to the patient and increases the risk of complications (e.g., infections).

In this paper, we apply a markerless real-time 2D region tracking algorithm [3] to obtain a prediction of the 3D transformation. This prediction, which in the work presented here is limited to translational motion, can then be refined by a full 2D-3D registration. After motion prediction, the registration starts in close proximity to the correct transformation, which improves both its accuracy and its computational efficiency. We evaluate our method with clinical data from a patient treated for a spinal tumor with the CyberKnife radiosurgery system, but the technique itself is applicable to other treatment systems and clinical applications.

To the best of our knowledge, this paper is the first to apply markerless 2D region tracking in x-ray projection images for 3D target tracking. Other groups have previously suggested using 2D in-plane transformations to speed up the 2D-3D registration process by pre-computing out-of-plane digitally reconstructed radiograph (DRR) images and applying 2D in-plane transformations to them during the registration [4]. Our method takes the opposite approach: we reverse the direction of inference by directly estimating the 3D transformation from the observed 2D motion. Our technique can thereby take advantage of the full x-ray image resolution, as well as the real-time performance of the 2D region tracking algorithm.

2 Methods

The objective of 2D-3D registration is to determine a (rigid) coordinate transformation that maps physical space coordinates to the patient coordinates as defined by the pre-operative CT image. At discrete time k, we denote this transformation by $T^{(k)}$. Also, let $P_A^{(k)}$ denote the k-th projection image from detector A (i.e., frame k in the projection image sequence), and let $P_B^{(k)}$ denote the corresponding image from detector B. For the CyberKnife system, each projection image has 512 x 512 pixels with a pixel size of 0.4 mm (Fig. 2).

2.1 2D Region Tracking

We use an independent implementation [5] of the hyperplane tracking algorithm introduced by Jurie & Dhome [3]. The tracker is trained on a manually drawn region of interest (ROI) in frame 0. In our application to spinal procedures, the ROI covers the target vertebra and its adjacent vertebrae on either side (Fig. 3). Larger ROIs would potentially cause problems due to the nonrigid motion of the spine as a whole, i.e., motion of the vertebrae relative to each other. The ROI we use typically covers only a small portion of the 512 x 512 projection image.

Figure 3: Tracking region for x-ray camera A with the uniformly distributed random template points (white dots). Note that their distribution was independent of image features and in particular did not focus on the implanted fiducial markers used for validation.

The hyperplane tracking algorithm is based on a data-driven template matching approach. After specification of the ROI in the first image of the sequence, the position of this template is successively computed in the following images. The reference template is represented by a vector $\vec{r} = (\vec{x}_1, \ldots, \vec{x}_N)^T$, which contains the 2D coordinates $\vec{x}_i = (x_i, y_i)^T$ of the template points. For the present paper, we use a fixed set of uniformly distributed random locations within the tracked ROI, which we have found to produce accurate results while maintaining real-time performance. The gray-level intensity of a point $\vec{x}_i$ in frame k is given by $f(\vec{x}_i, k)$. Consequently, the vector $\vec{f}(\vec{r}, k)$ contains the intensities of template $\vec{r}$ in frame k.

For the purpose of the present paper, we limit the tracker to pure translational motion, although it is capable of tracking true affine motion including anisotropic scale factors and shear. The transformation of the reference template can therefore be modeled by $\vec{r}_k = g(\vec{r}, \vec{x}_k)$, where $\vec{x}_k = (\Delta x_k, \Delta y_k)^T$ contains the translation parameters and $g(\cdot, \cdot)$ is the function that applies the translation to the template point coordinates. Consequently, template matching can be described as computing the translation parameters $\vec{x}_k$ that minimize the least-squares intensity difference between the reference template and the current template. To reduce the computational cost of a non-linear optimization, Refs. [3, 6] use a first-order approximation

$\vec{x}_{k+1} = \vec{x}_k + A \, \vec{i}_{k+1}$    (1)

with the error vector $\vec{i}_{k+1} = \vec{f}(\vec{r}, 0) - \vec{f}\big(g(\vec{r}, \vec{x}_k), \, k+1\big)$.

There are two approaches for computing the matrix A in Eq. (1). Hager & Belhumeur [6] proposed using a Taylor approximation. Jurie & Dhome [3] use an initialization stage (i.e., a training step) in which a number of random motions are simulated and used to estimate the matrix A by least-squares estimation. Note that this initialization needs to be performed only for the first frame in the image sequence. For the work described in this paper, we use the hyperplane approach [3] because of its larger basin of convergence.
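To make the two stages of the tracker concrete, here is a minimal Python sketch, not the implementation used in this work (which follows Ref. [5]): a training step that simulates random translations of the reference template and estimates the matrix A by least squares, followed by the per-frame update of Eq. (1). The helper sample_intensities and all numeric parameters are illustrative assumptions; interpolation, illumination handling, and other refinements of the real tracker are omitted.

    import numpy as np

    def sample_intensities(image, points):
        # Nearest-neighbor lookup of gray values at the 2D template points (x, y);
        # a real tracker would use bilinear interpolation.
        ij = np.rint(points).astype(int)
        ij[:, 0] = np.clip(ij[:, 0], 0, image.shape[1] - 1)
        ij[:, 1] = np.clip(ij[:, 1], 0, image.shape[0] - 1)
        return image[ij[:, 1], ij[:, 0]].astype(float)

    def train_hyperplane(frame0, points, n_train=1000, max_shift=5.0, seed=0):
        # Training step of Jurie & Dhome: simulate random translations of the template
        # in frame 0 and solve a least-squares problem for the matrix A of Eq. (1).
        rng = np.random.default_rng(seed)
        f_ref = sample_intensities(frame0, points)
        shifts = rng.uniform(-max_shift, max_shift, size=(n_train, 2))
        errors = np.stack([f_ref - sample_intensities(frame0, points + s) for s in shifts])
        # Find A such that A @ error approximates the correction (-shift) in the LSQ sense.
        A_t, *_ = np.linalg.lstsq(errors, -shifts, rcond=None)
        return f_ref, A_t.T                      # A has shape (2, N)

    def track_frame(frame_k, points, x_prev, f_ref, A, n_iter=5):
        # Per-frame update x_{k+1} = x_k + A i_{k+1}, iterated a few times for robustness.
        x = np.asarray(x_prev, dtype=float)
        for _ in range(n_iter):
            i_err = f_ref - sample_intensities(frame_k, points + x)
            x = x + A @ i_err
        return x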

2.2 3D Motion Estimation

The projection geometry and the mathematical symbols used below are illustrated in Fig. 4. For two projections, let $\vec{x}_A$ be the normalized (i.e., $\|\vec{x}_A\| = 1$) 3D direction vector of detector plane A in the x pixel direction. Analogously, let $\vec{y}_A$ be the normalized vector in the y pixel direction of detector A, and let $\vec{x}_B$ and $\vec{y}_B$ be the corresponding vectors for detector B. In our application, the direction vectors are invariant over time, as the projection imaging devices of the CyberKnife are installed in fixed locations. However, this is not a requirement of the proposed method.

2.2.1 Motion Backprojection

The result of the tracking for any given frame is a pair of 2D translation vectors $\vec{t}_A$ and $\vec{t}_B$, which quantify the in-plane motion in projection images $P_A$ and $P_B$, respectively. From these and the detector orientations, we can compute the 3D motion of the tracked pattern as

$\vec{d}_A = c_A \, (\vec{x}_A \ \vec{y}_A) \, \vec{t}_A$  and  $\vec{d}_B = c_B \, (\vec{x}_B \ \vec{y}_B) \, \vec{t}_B$.    (2)

The 3x2 matrices $(\vec{x}_A \ \vec{y}_A)$ and $(\vec{x}_B \ \vec{y}_B)$ rotate the 2D translation vectors $\vec{t}_A$ and $\vec{t}_B$, respectively, from the 2D x-ray image coordinate system to the 3D treatment room coordinate system. The coefficients $c_A$ and $c_B$ are linear scaling factors that take into account the perspective effect of the x-ray projection. For projection A this factor is

$c_A = f_A^{-1} \, (f_A - d_A)$.    (3)

For projection B, the scaling factor $c_B$ is computed accordingly. Note that Eq. (3) is only correct on the central (orthogonal) projection ray, at a distance $d_A$ from the projection plane. However, for the large focal length in our application ($f_{A/B} \approx 3{,}800$ mm vs. an x-ray field of view of roughly 200 mm) the approximation is sufficiently accurate in the entire CT image volume.

2.2.2 Consistent Motion Estimation

Since for two or more projection geometries not all of the detector orientations are orthogonal in 3D, we have to compensate for multiple contributions along the same directions. For that, let M be the matrix that contains all projection plane direction vectors as its columns, i.e.,

$M = (\vec{x}_A \ \vec{y}_A \ \vec{x}_B \ \vec{y}_B)$.    (4)

Let $\vec{e}_x = (1, 0, 0)^T$, $\vec{e}_y = (0, 1, 0)^T$, and $\vec{e}_z = (0, 0, 1)^T$ be the x, y, and z unit vectors, respectively. Then the diagonal matrix

$N = \mathrm{diag}(s_x, s_y, s_z)^{-1}$    (5)

with diagonal elements

$s_x = \vec{e}_x^T M M^T \vec{e}_x$,  $s_y = \vec{e}_y^T M M^T \vec{e}_y$,  $s_z = \vec{e}_z^T M M^T \vec{e}_z$    (6)

normalizes for the accumulated contributions along the x, y, and z directions (see the Appendix for a derivation of N). Using N and the 3D in-plane translation vectors $\vec{d}_A$ and $\vec{d}_B$, we obtain a consistent 3D translation estimate as

$\Delta\vec{T} = N \, (\vec{d}_A + \vec{d}_B)$.    (7)

As a concrete example, consider the projection geometries of the CyberKnife, which provided the data for the evaluation later in this paper. The two ASD devices of the CyberKnife have the following direction vectors:

$\vec{x}_A = (1, 0, 0)^T$,  $\vec{y}_A = (0, 1/\sqrt{2}, 1/\sqrt{2})^T$

for projection A and

$\vec{x}_B = (1, 0, 0)^T$,  $\vec{y}_B = (0, -1/\sqrt{2}, 1/\sqrt{2})^T$

for projection B. These yield $N = \mathrm{diag}(1/2, 1, 1)$, so when combining the motion estimates from the two projections using Eq. (7), the contributions along the (in 3D) parallel x pixel axes of both projections are averaged. The contributions from the y axes of the projection planes, which are orthogonal with respect to each other and with respect to the x axes, are taken as they are. This is precisely what one would intuitively expect.
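The following numerical sketch walks through Eqs. (2)-(7) for this geometry. It is illustrative only: the focal length is taken from the text, while the object distance and the 2D tracking results are placeholders, and the scaling factor uses the form of Eq. (3) reconstructed above.

    import numpy as np

    # Detector in-plane direction vectors of the two CyberKnife projections (see text).
    x_A = np.array([1.0, 0.0, 0.0]); y_A = np.array([0.0,  1 / np.sqrt(2), 1 / np.sqrt(2)])
    x_B = np.array([1.0, 0.0, 0.0]); y_B = np.array([0.0, -1 / np.sqrt(2), 1 / np.sqrt(2)])

    def backproject(t_2d, x_dir, y_dir, f, d):
        # Eqs. (2)-(3): rotate the 2D in-plane translation into 3D and rescale for the
        # perspective magnification on the central ray.
        P = np.column_stack([x_dir, y_dir])           # 3x2 matrix (x y)
        c = (f - d) / f                               # scaling factor of Eq. (3)
        return c * (P @ np.asarray(t_2d, dtype=float))

    # Normalization matrix N of Eqs. (4)-(6); for this geometry N = diag(1/2, 1, 1).
    M = np.column_stack([x_A, y_A, x_B, y_B])         # 3x4 matrix of all direction vectors
    N = np.diag(1.0 / np.diag(M @ M.T))

    # Placeholder values: 2D tracking results (mm at the detector) and distances (mm).
    t_A, t_B = np.array([1.0, 2.0]), np.array([1.2, -0.5])
    f_A = f_B = 3800.0
    d_A = d_B = 1500.0

    d3_A = backproject(t_A, x_A, y_A, f_A, d_A)
    d3_B = backproject(t_B, x_B, y_B, f_B, d_B)
    delta_T = N @ (d3_A + d3_B)                       # consistent 3D estimate, Eq. (7)
    print(np.diag(N), delta_T)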

Figure 4: Projection geometry and notation. The focal length f is the distance between the x-ray source and the projection plane. The object-to-projection-plane distance d is the distance between the center of the CT image and the projection plane. The projection plane is spanned in 3D by the vectors $\vec{x}$ and $\vec{y}$. The 2D translation vector $\vec{t}$ from tracking is back-projected to yield the 3D translation vector $\vec{d}$.

2.2.3 3D Transformation Update

With the 3D motion estimate $\Delta T^{(k)}$ from time 0 to time k computed from the in-plane motion as described above, the estimated transformation from physical space to patient coordinates is obtained by applying the motion estimate to the reference transformation at time 0:

$T^{(k)} = \Delta T^{(k)} \circ T^{(0)}$.    (8)

2.3 2D-3D Registration

We perform 2D-3D registration with an intensity-based method that relies on the computation of DRR images [7]. These simulated x-ray projections are computed by ray casting through the pre-operative CT image and compared to the actual x-ray images. The pose of the CT image is adjusted by an optimization algorithm until the similarity of the simulated and actual projection images is maximized. We use normalized mutual information [8] as the similarity measure. To speed up DRR computation, we use progressive attenuation fields [9], a recently introduced method for dynamically caching and reusing projection values.
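For illustration of the similarity measure alone (DRR generation and the pose optimizer are outside its scope), a minimal normalized mutual information function in the sense of Studholme et al. [8], evaluated on a joint gray-value histogram of a simulated and a measured projection, might look as follows; the bin count is an arbitrary assumption.

    import numpy as np

    def normalized_mutual_information(drr, xray, bins=64):
        # NMI = (H(A) + H(B)) / H(A, B), estimated from a joint gray-value histogram.
        joint, _, _ = np.histogram2d(drr.ravel(), xray.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1)
        p_b = p_ab.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())

    # A pose optimizer would vary the CT pose, regenerate the DRRs, and maximize this
    # value summed over both projections.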

2.4 Evaluation

We apply the methods proposed in this paper to image data from a patient treated for a spinal tumor using the CyberKnife radiosurgery system. The true coordinate transformations between physical space and pre-operative image coordinates are known from implanted fiducial markers [7]. For the purpose of this evaluation, we assume that the correct transformation (i.e., the gold standard) between physical space and patient coordinates at time k = 0 is known. Let this transformation be denoted $T^{(0)}_{\mathrm{gold}}$. For the subsequent times k > 0, we estimate transformations $T^{(k)}$ using each of the following three frame-to-frame 3D tracking methods:

1. 2D-3D registration of the CT image to the next x-ray projection image frames,
2. 3D motion estimation from 2D region tracking in the x-ray projection images, and
3. 3D motion estimation from tracking followed by a 2D-3D registration, where the output of the 3D motion estimation serves as the starting point for the 2D-3D registration.

The accuracy of the estimated transformation is then computed as the target registration error (TRE) [10] relative to the respective gold-standard transformation at time k, i.e., $T^{(k)}_{\mathrm{gold}}$. The TRE itself is computed as the root-mean-square (rms) difference between coordinates in a region V mapped using the estimated transformation and those mapped using the gold-standard transformation:

$\mathrm{TRE}^{(k)} = \sqrt{\frac{1}{|V|} \sum_{\vec{x} \in V} \left\| T^{(k)}(\vec{x}) - T^{(k)}_{\mathrm{gold}}(\vec{x}) \right\|^2}$.    (9)

The region V is the target volume of the surgical procedure; in this study, it is the manually defined bounding box of the vertebra targeted during radiosurgery. For comparison, we also compute the uncorrected TRE, that is, the TRE without any motion correction. The uncorrected TRE uses the gold-standard transformation for frame k = 0 as the reference, which is based on the assumption that the initial position of the patient is known perfectly. For all subsequent frames k > 0, the uncorrected TRE, which is identical to the actual patient motion $m^{(k)}$ in the target volume relative to frame 0, is then computed as the rms difference of the gold-standard transformations at time k and time 0:

$m^{(k)} = \sqrt{\frac{1}{|V|} \sum_{\vec{x} \in V} \left\| T^{(k)}_{\mathrm{gold}}(\vec{x}) - T^{(0)}_{\mathrm{gold}}(\vec{x}) \right\|^2}$.    (10)
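Eqs. (9) and (10) translate directly into code; the sketch below assumes rigid transformations given as 4x4 homogeneous matrices and the target volume V represented by an array of sample points (all names are illustrative).

    import numpy as np

    def apply_rigid(T, pts):
        # Map an (n, 3) array of points through a 4x4 homogeneous transformation.
        return pts @ T[:3, :3].T + T[:3, 3]

    def target_registration_error(T_est, T_gold, pts):
        # Eq. (9): rms distance between correspondingly mapped target-volume points.
        diff = apply_rigid(T_est, pts) - apply_rigid(T_gold, pts)
        return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

    # The uncorrected error of Eq. (10) is the same quantity with the frame-0 gold-standard
    # transformation in place of the estimate:
    #   m_k = target_registration_error(T_gold_frame0, T_gold_frame_k, pts)
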
3 Results

The distribution of TRE values between frame 0 and each of the subsequent frames is plotted in Fig. 5. All three motion compensation methods effectively track 3D motion and thus reduce the registration error. However, both motion estimation from 2D tracking and registration after tracking clearly outperform registration alone. It appears, furthermore, that 2D tracking alone performs better than tracking followed by a 2D-3D registration step. Statistical analysis (two-sided paired t-test) shows that TRE values after tracking and registration are significantly lower than after registration alone (P < 0.05). Likewise, TRE values after tracking alone are significantly lower than after registration alone (P < 0.05). The difference between tracking alone and tracking plus registration is not statistically significant.

Figure 5: Box-and-whisker plot of the distribution of target registration errors between frame 0 and subsequent frames using the three methods described in Section 2.4. For comparison, the leftmost box plot shows the uncorrected errors, i.e., the actual patient motion. The small squares show the median values, the horizontal bars the mean values. The lower and upper ends of the boxes correspond to the 25th and 75th percentiles, respectively. The whiskers show the range of values between minimum and maximum.

It is interesting to compare 2D tracking and tracking plus registration in more detail. The evolution of TRE values by frame over time is shown in Fig. 6. First, compare the graphs for 2D tracking (C) and tracking with registration (D). It is clear that, while generally more accurate than registration, tracking errors occasionally increase (past frame #4), whereas the errors of tracking and registration combined remain stable over time. Between registration alone and registration preceded by 2D tracking (graph (B) vs. graph (D)), the tracking appears to improve registration accuracy in particular for frames with larger patient motion (e.g., frames #3 and #9).

The computation times from frame to frame of the three motion correction methods are compared in Fig. 7. First estimating 3D motion with a tracking step substantially reduces the time spent on 2D-3D registration: the mean CPU time for registration per frame is considerably lower with the preceding tracking step than without it. Note that tracking itself takes about 4 s for the first frame (training of the hyperplane tracker) and only a fraction of a second for each subsequent frame. All times were obtained using a PC with a 3 GHz Intel Pentium 4 CPU.
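The statistical comparison reported above is a standard two-sided paired t-test over per-frame TRE values and, with SciPy, reduces to a single call; the arrays below are illustrative stand-ins, not the values measured in this study.

    import numpy as np
    from scipy import stats

    # Hypothetical per-frame TRE values (mm) for two methods, paired frame by frame.
    tre_registration = np.array([0.9, 1.1, 0.8, 1.3, 1.0, 1.2])
    tre_tracking = np.array([0.6, 0.7, 0.5, 0.9, 0.6, 0.8])

    t_stat, p_value = stats.ttest_rel(tre_registration, tre_tracking)  # two-sided by default
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")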

Figure 6: Plots of target registration errors over time between frame 0 and subsequent frames. (A) Errors without motion correction. (B) Errors with 2D-3D registration. (C) Errors with 2D tracking. (D) Errors with 2D tracking and 2D-3D registration. In (B) through (D), the curve of uncorrected errors (actual patient motion) is plotted in dots as a reference.

Figure 7: Plots of computation time from frame to frame. Note that 2D tracking (left box) is several orders of magnitude faster than 2D-3D registration (about 4 s for the first frame, a fraction of a second for each subsequent frame).

4 Discussion

This paper, to the best of our knowledge, is the first to propose markerless real-time 2D region tracking to estimate 3D patient motion during image-guided procedures. Our initial results on clinical data from a spinal radiosurgery procedure show that our method is accurate and fast. We have also shown that it can be combined with intensity-based 2D-3D registration and improves both the accuracy and the computational efficiency of the latter.

The 2D tracking can take advantage of the full resolution of the x-ray projection images (0.4 mm pixel size), while the 2D-3D registration is essentially limited by the resolution of the pre-operative CT image, in particular its slice thickness, and by its potential artifacts, e.g., from respiratory motion. On the other hand, 2D tracking cannot correctly identify components of the 3D transformation that are out of plane for the respective projection. In its current form, our method cannot predict any 3D rotations, even though the tracking algorithm is capable of detecting 2D rotations. Also, changes in the tracked region due to out-of-plane components can potentially interfere with the correct estimation even of the in-plane motion components. The intensity-based 2D-3D registration does not suffer from limitations due to out-of-plane motion.

In future work, however, we plan to estimate the rotational components of the 3D transformation consistently from in-plane rotations of the projection images. Using occasional re-initialization of the tracker after rotations have exceeded a maximum threshold, we hope to also make the tracker robust to changes of the tracked features due to out-of-plane rotations (because x-ray images are line integrals of the attenuation coefficients encountered along rays from the x-ray source to the detector, the x-ray image features used by the 2D tracker can change with rotation).

For mutually orthogonal projections (up to three in 3D), the extension of our method to 3D rotations is straightforward. In this special case, each projection provides a rotation estimate that is in-plane with respect to itself and entirely out-of-plane for the other projections. These estimates can be combined consistently by successively applying them to the 3D volume, taking into consideration that all but the first rotation must rotate around axes rotated according to the preceding rotation(s). Implementing and evaluating this rotation estimation will also be the subject of future work on this project.
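The successive composition described in the preceding paragraph can be illustrated with a small sketch. This is not an implementation from the paper (the extension is explicitly left to future work), and the rotation axes and angles are arbitrary placeholders; the point is only that each later in-plane estimate is applied about an axis that has first been rotated by the already-accumulated rotation.

    import numpy as np

    def axis_angle_matrix(axis, angle):
        # Rotation by `angle` radians about the unit vector `axis` (Rodrigues' formula).
        a = np.asarray(axis, dtype=float)
        a = a / np.linalg.norm(a)
        K = np.array([[0, -a[2], a[1]],
                      [a[2], 0, -a[0]],
                      [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    def compose_inplane_rotations(axes, angles):
        # Apply the per-projection rotation estimates in order; every later axis is
        # first rotated by the already-accumulated rotation.
        R_total = np.eye(3)
        for axis, angle in zip(axes, angles):
            R_i = axis_angle_matrix(R_total @ np.asarray(axis, dtype=float), angle)
            R_total = R_i @ R_total
        return R_total

    # Example with three mutually orthogonal projection axes (placeholders).
    R = compose_inplane_rotations([(1, 0, 0), (0, 1, 0), (0, 0, 1)],
                                  np.radians([5.0, -3.0, 2.0]))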

Ultimately, we would like to develop a framework to consistently combine mixtures of in-plane and out-of-plane rotations, which could be applied to arbitrary numbers of projections that are not mutually orthogonal.

Acknowledgment

Daniel Russakoff and Calvin Maurer received support from the Interdisciplinary Initiatives Program, which is part of the Bio-X Program at Stanford University, under the grant "Image-Guided Radiosurgery for the Spine and Lungs." Christoph Gräßl received support from the European Commission 5th IST Program Project VAMPIRE. Only the authors are responsible for the content. This research was performed as part of a collaboration established with support from the Bavaria California Technology Center (BaCaTec), principal investigators Joachim Denzler and Torsten Rohlfing.

A Normalization Matrix

In this appendix, we derive the normalization matrix that takes into account contributions from multiple projection planes with directions that are not all mutually orthogonal. Let $\vec{\delta}_i$ for $i = 1, \ldots, N$ be the normalized (i.e., $\|\vec{\delta}_i\| = 1$) direction vectors of N projection image planes in 3D. When all these vectors are added, the accumulated contribution along the positive x direction is

$s_x = \sum_{i=1}^{N} \langle \vec{\delta}_i, \vec{e}_x \rangle^2 = \vec{e}_x^T \left( \vec{\delta}_1 \cdots \vec{\delta}_N \right) \begin{pmatrix} \vec{\delta}_1^T \\ \vdots \\ \vec{\delta}_N^T \end{pmatrix} \vec{e}_x = \vec{e}_x^T M M^T \vec{e}_x$,    (11)

where M is defined analogously to Eq. (4). The contributions along the y and z directions are expressed likewise. With these, the matrix that normalizes the sum of all directions to unity is

$\mathrm{diag}(s_x, s_y, s_z)^{-1} = N$    (12)

with N defined as in Eq. (5).

References

[1] S.D. Chang, W. Main, D.P. Martin, et al. An analysis of the accuracy of the CyberKnife: A robotic frameless stereotactic radiosurgical system. Neurosurgery, 52(1):140-147, 2003.
[2] M. Murphy. Tracking moving organs in real time. Semin Radiat Oncol, 14(1):91-100, 2004.
[3] F. Jurie, M. Dhome. Hyperplane approximation for template matching. IEEE Trans Pattern Anal Machine Intell, 24(7):996-1000, 2002.
[4] D. Sarrut, S. Clippe. Geometrical transformation approximation for 2D/3D intensity-based registration of portal images and CT scan. In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 2208 of LNCS, pp. 532-540, Heidelberg, 2001. Springer-Verlag.
[5] C. Gräßl, T. Zinßer, H. Niemann. Illumination insensitive template matching with hyperplanes. In Proc. Pattern Recognition, 25th DAGM Symposium, vol. 2781 of LNCS, pp. 273-280, Heidelberg, 2003. Springer-Verlag.
[6] G.D. Hager, P.N. Belhumeur. Efficient region tracking with parametric models of geometry and illumination. IEEE Trans Pattern Anal Machine Intell, 20(10):1025-1039, 1998.
[7] D.B. Russakoff, T. Rohlfing, A. Ho, et al. Evaluation of intensity-based 2D-3D spine image registration using clinical gold-standard data. In Proc. Biomedical Image Registration, 2nd International Workshop (WBIR), vol. 2717 of LNCS, pp. 151-160, Heidelberg, 2003. Springer-Verlag.
[8] C. Studholme, D.L.G. Hill, D.J. Hawkes. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognit, 32(1):71-86, 1999.
[9] T. Rohlfing, D.B. Russakoff, J. Denzler, et al. Progressive attenuation fields: Fast 2D-3D image registration without precomputation. In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, Heidelberg, 2004. Springer-Verlag. In press.
[10] J.M. Fitzpatrick, J.B. West, C.R. Maurer, Jr. Predicting error in rigid-body, point-based registration. IEEE Trans Med Imag, 17(5):694-702, 1998.