Overview of Proposed TG-132 Recommendations
Kristy K. Brock, Ph.D., DABR, Associate Professor, Department of Radiation Oncology, University of Michigan; Chair, AAPM TG-132: Image Registration and Fusion
Conflict of Interest I have a licensing agreement for deformable image registration technology with RaySearch Laboratories.
Clinical Recommendations (1/2)
1. Understand the basic image registration techniques and methods of visualizing image fusion
2. Understand the basic components of the registration algorithm used clinically to ensure its proper use
3. Perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system
Clinical Recommendations (2/2)
4. Perform comprehensive commissioning of image registration using the provided digital phantom data (or similar data) as well as clinical data from the user's institution
5. Develop a request and report system to ensure communication and documentation between all users of image registration
6. Establish a patient-specific QA practice for efficient evaluation of image registration results
Commissioning and QA
Understand the whole picture
Understand fundamental components of algorithm
Understand the basic image registration techniques and methods of visualizing image fusion
How? The TG report has basic information and references; AAPM Virtual Library; several books and review papers
Why? Many Image Registration Techniques
Metric: your eye; least squares (points); chamfer matching (surface matching, x-ray); mean square difference (good for same modality, different contrast/noise: CECT, CT, CBCT); correlation coefficient; mutual information (works for multi-modality)
Transformation: translation; translation + rotation; affine (translation + rotation + scaling + shearing); spline (B-spline, thin-plate spline); physical (optical/fluid flow, elastic, multi-body); biomechanical
Optimization: brain-power; simplex; gradient descent; etc.
Notes: contour matching is quick, easy, and local; surface-based matching (manual or auto-segmentation) is great for 4D
Mutual Information
Maximize the mutual information: from the marginal entropies H(A), H(B) and the joint entropy H(A,B), the mutual information is I(A,B) = H(A) + H(B) - H(A,B)
Sensitivity of results: vary the vector field and evaluate the change in the similarity metric (Hub et al., IEEE TMI 2009)
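The relation I(A,B) = H(A) + H(B) - H(A,B) can be made concrete with a minimal sketch: estimating mutual information from a joint intensity histogram of two images. This is illustrative only (function name and bin count are chosen for the example), not any vendor's implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate I(A,B) = H(A) + H(B) - H(A,B) from a joint
    intensity histogram of two same-sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()          # joint probability
    p_a = p_ab.sum(axis=1)              # marginal of A
    p_b = p_ab.sum(axis=0)              # marginal of B

    def entropy(p):
        p = p[p > 0]                    # 0 * log 0 := 0
        return -np.sum(p * np.log2(p))

    return entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel())
```

For identical images the joint histogram is diagonal, so I(A,A) = H(A); for statistically independent images the estimate approaches zero, which is why MI serves as a multi-modality alignment score.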
How Reliable is the Max MI? (The optimizer actually minimizes -MI.)
[Plots: -MI as a function of displacement dx, with the minimum of -MI and the best solution marked; the two do not necessarily coincide.]
Intensity Variation: Impact on CC/MSD
[Images: clear intensity variation vs. no relevant intensity variation (noise/artifact)]
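The different sensitivity of CC and MSD to intensity variation can be shown with a small sketch (illustrative function names, not a clinical implementation): for perfectly aligned images that differ only by a linear intensity mapping, CC stays at 1 while MSD grows large.

```python
import numpy as np

def msd(a, b):
    """Mean squared intensity difference (lower is better)."""
    return np.mean((a - b) ** 2)

def cc(a, b):
    """Pearson correlation coefficient (higher is better)."""
    a, b = a - a.mean(), b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

# Same "anatomy", different intensity mapping (e.g., contrast change):
rng = np.random.default_rng(0)
img = rng.random((64, 64))
scaled = 2.0 * img + 100.0
# cc(img, scaled) remains 1 (CC is invariant to linear intensity changes),
# while msd(img, scaled) is large despite perfect alignment.
```

This is why MSD suits same-modality images with comparable intensity scales, while CC tolerates linear contrast differences and MI tolerates arbitrary intensity relationships.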
1. Measuring the similarity of alignment of multi-modality images is complex, typically requiring the use of
A. Sum of the Squared Differences (SSD) [4%]
B. Guessing (G) [3%]
C. Mutual Information (MI) [85%]
D. Mean Squared Difference (MSD) [6%]
E. Cubed Subtracted Less One (CSLO) [2%]
1. Measuring the similarity of alignment of multi-modality images is complex, typically requiring the use of
A. Sum of the Squared Differences (SSD)
B. Guessing (G)
C. Mutual Information (MI)
D. Mean Squared Difference (MSD)
E. Cubed Subtracted Less One (CSLO)
REFERENCE: P. Viola, W.M. Wells, Alignment by maximization of mutual information. International Journal of Computer Vision, 24 (1997), pp. 137-154
Understand the basic components of the registration algorithm used clinically to ensure its proper use
Why do we need to know the implementation? To know what knobs you can turn and what they do.
How? At minimum, the vendor should disclose the similarity metric, regularization, transformation, and optimization method used. Read white papers.
New Method to Validate Deformable Image Registration: deformable 3D Presage dosimeters
[Images: control (no deformation); deformed (27% lateral compression)]
Slides courtesy of Mark Oldham and Shiva Das
Dosimeter & Deformable Registration-based Dose Accumulation: Dose Distributions
[Images: deformed dosimeter vs. DVF-based accumulation]
Field shape differences and field displacements: horizontal (compression axis) 40% narrower to 175% wider; vertical 33% shorter to 50% taller
Caution must be used when accumulating dose, especially in regions of the image with homogeneous intensity.
Slides courtesy of Mark Oldham and Shiva Das
Different DIR Algorithms have Different Strengths and Weaknesses
[Coronal, axial, and sagittal dose distributions]
3D γ (3%/3 mm) pass rates: measured, optical CT (control): 96% [1]; DIR-predicted, intensity-based DIR: 60% [1]; DIR-predicted, biomechanical with surface projection: 91% [2]
1. Juang et al. IJROBP 2013;87(2):414-421
2. Velec M, et al. PRO, 2015
Commissioning and QA
Understand the whole picture
Understand fundamental components of algorithm
Phantom approach to understand characteristics of algorithm implementation
Perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system
How? Any simple phantom or solid water
Why? It's already mandated
Why Virtual Phantoms?
Known attributes (volumes, offsets, deformations, etc.)
Testing standardization: we are all using the same data
Geometric phantoms: quantitative validation
Anthropomorphic phantoms: realistic and quantitative
Rigid Geometric Data
Helps us learn the impact of the knobs of the registration
Validation of the most straightforward case (similar to a 20x20 field profile)
*Phantom data courtesy of ImSimQA
Example Commissioning Tests
Rigid Anatomical Phantom: Multi-Modality Translation Offset
One additional (simple) layer of complexity
Deformable Phantom Commissioning Procedure:
Run deformable image registration
Export the DICOM deformation vector field (DVF)
Pseudo code provided to compare the known DVF with the exported DVF
Target: 95% of voxels within 2 mm, max error less than 5 mm
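The DVF comparison above can be sketched minimally as follows, assuming the known and exported fields are sampled on the same voxel grid with displacement vectors in mm (the function name and array layout are illustrative, not the TG-provided pseudo code itself):

```python
import numpy as np

def compare_dvfs(known, exported, tol_mm=2.0):
    """Compare a known deformation vector field with the DVF exported
    from the registration system. Both are (X, Y, Z, 3) arrays of
    displacement vectors in mm on the same grid. Returns the fraction
    of voxels whose vector error is within tol_mm, and the max error."""
    err = np.linalg.norm(known - exported, axis=-1)  # per-voxel error, mm
    return np.mean(err <= tol_mm), err.max()

# Pass criterion from the procedure above:
# frac, max_err = compare_dvfs(known_dvf, exported_dvf)
# passed = frac >= 0.95 and max_err < 5.0
```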
Deformable Lung Clinical Lung Data Simulated Deformed Lung *Courtesy DIR-lab, Dr. Castillo
Target Tolerances
Stationary Image | Moving Image | Test | Tolerance
All datasets | - | Voxel intensity, orientation | Exact
Basic Phantom Dataset 2 | Each modality image in Basic Phantom Dataset 1 | Rigid registration, translation only | Maximum cardinal direction error less than 0.5 x voxel dimension
Basic Phantom Dataset 3 | Each modality image in Basic Phantom Dataset 1 | Rigid registration, translation and rotation | Maximum cardinal direction error less than 0.5 x voxel dimension
Basic Anatomical Dataset 1 | Basic Anatomical Dataset 2 | Registration, translation only | Maximum cardinal direction error less than 0.5 x voxel dimension
Basic Anatomical Dataset 1 | Basic Anatomical Dataset 3 | Registration, translation only | Maximum cardinal direction error less than 0.5 x voxel dimension
Basic Anatomical Dataset 1 | Basic Anatomical Dataset 4 | Registration, translation only | Maximum cardinal direction error less than 0.5 x voxel dimension
Basic Anatomical Dataset 1 | Basic Anatomical Dataset 5 | Registration, translation only | Maximum cardinal direction error less than 0.5 x voxel dimension
Basic Anatomical Dataset 1 | Basic Deformation Dataset 1 | Deformable registration | 95% of voxels within 2 mm; max error less than 5 mm
Sliding Deformation Dataset 1 | Sliding Deformation Dataset 2 | Deformable registration | 95% of voxels within 2 mm; max error less than 5 mm
Clinical 4DCT dataset (deformation can be processed in either direction) | - | Deformable registration | Mean vector error of all landmark points less than 2 mm; max error less than 5 mm
Validation Tests and Frequencies
Frequency: Acceptance and Commissioning; Annual or Upon Upgrade
Test | Tolerance
System end-to-end tests, data transfer using physics phantom | Accurate
Rigid registration accuracy (digital phantoms, subset) | Baseline
Deformable registration accuracy (digital phantoms, subset) | Baseline
Example clinical patient case verification | Baseline
Commissioning and QA
Understand the whole picture
Understand fundamental components of algorithm
Phantom approach to understand characteristics of algorithm implementation
Quantitative validation of clinical images
What Tools Do We Have?
Visual verification: an excellent tool for established techniques, but not enough for commissioning!
Quantitative Validation Techniques
Landmark based: Does the registration map a landmark on Image A to the correct position on Image B? Target Registration Error (TRE)
Contour based: Does the registration map the contours onto the new image correctly? Dice Similarity Coefficient (DSC); Mean Distance to Agreement (MDA)
Digital/physical phantoms: Compare known motion with registration results
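The contour-based metrics can be sketched for binary masks as follows. This is an illustrative brute-force version (assumed helper names; it also assumes the masks do not touch the array border, since the neighbor test wraps at edges), not a vendor tool:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface_voxels(mask):
    """Coordinates of mask voxels with at least one background
    face-neighbor (assumes the mask does not touch the border)."""
    m = mask.astype(bool)
    interior = m.copy()
    for axis in range(m.ndim):
        interior &= np.roll(m, 1, axis) & np.roll(m, -1, axis)
    return np.argwhere(m & ~interior)

def mean_distance_to_agreement(a, b, spacing=1.0):
    """Symmetric mean surface distance between two binary masks,
    in voxel units times `spacing`. Brute force: fine for small ROIs."""
    pa, pb = surface_voxels(a), surface_voxels(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return spacing * 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

DSC rewards volume overlap (1.0 = identical masks) while MDA measures boundary disagreement in physical units, so the two are complementary: a large structure can have a high DSC while still hiding a locally large surface error.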
Landmark Based (TRE)
[Images A and B with corresponding landmarks]
CT: 512x512x152; 0.09 cm in-plane, 0.25 cm slice; GE scanner; 4D CT with Varian RPM
Reproducibility of point identification is sub-voxel
Quantification of local accuracy within the target; increasing the number of points increases the overall volume quantified
Can identify gross/max errors
Manual technique
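A minimal sketch of the TRE calculation (function name is illustrative): the residual distance between landmarks identified on Study B and the corresponding Study A landmarks after mapping them through the registration.

```python
import numpy as np

def target_registration_error(points_b, points_a_mapped):
    """TRE: residual distances between landmarks identified on Study B
    and the Study A landmarks mapped through the registration.
    Both inputs are (N, 3) arrays in consistent units (e.g., mm).
    Returns (mean TRE, max TRE)."""
    d = np.linalg.norm(points_b - points_a_mapped, axis=1)
    return d.mean(), d.max()
```

Reporting both the mean and the maximum matters: the mean summarizes overall accuracy while the maximum flags the gross local errors the slide above warns about.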
That sounds great! Is that enough?
Accuracy of Points
[Image: three repeated landmark selections (X) with a 1 cm scale bar]
RMS = 0.3 mm
Points Don't Tell the Whole Story
[Image: landmark selections (X) with a 1 cm scale bar]
2. Target registration error (TRE) is defined as the
A. Uncertainty in selecting landmarks on an image [3%]
B. Average residual error between the identified points on Study B and the points identified on Study A, mapped onto Study A through image registration [72%]
C. Improvement in accuracy when using deformable registration over rigid registration [5%]
D. Volume overlap of 2 contours on registered images [5%]
E. Mean surface distance between 2 contours on registered images [16%]
2. Target registration error (TRE) is defined as the A. Uncertainty in selecting landmarks on an image B. Average residual error between the identified points on Study B and the points identified on Study A, mapped onto Study A through image registration C. Improvement in accuracy when using deformable registration over rigid registration D. Volume overlap of 2 contours on registered images E. Mean surface distance between 2 contours on registered images REFERENCE: Fitzpatrick, J.M., J.B. West, and C.R. Maurer, Jr., Predicting error in rigid-body point-based registration. IEEE Trans Med Imaging, 1998. 17(5): p. 694-702.
Commissioning and QA
Understand the whole picture
Understand fundamental components of algorithm
Phantom approach to understand characteristics of algorithm implementation
Quantitative validation of clinical images
Documentation and evaluation in clinical environment
Request
Clear identification of the image set(s) to be registered
Identification of the primary (e.g., reference) image geometry
An understanding of the local region(s) of importance
The intended use of the result (e.g., target delineation)
Techniques to use (deformable or rigid)
The accuracy required for the final use
Report
Identify the actual images used
Indicate the accuracy of registration for local regions of importance and anatomical landmarks
Identify any critical inaccuracies to alert the user
Verify acceptable tolerances for use
Techniques used to perform the registration
Fused images in the report with annotations
Documentation from the system used for fusion
Establish a patient-specific QA practice for efficient evaluation of image registration results
Why? At this point we are still learning how the registration performs on different types of patients
How? Visual verification; spot checks of landmarks; boundary comparison
Vendor Recommendations 1. Disclose basic components of their registration algorithm to ensure its proper use 2. Provide the ability to export the registration matrix or deformation vector field for validation 3. Provide tools to qualitatively evaluate the image registration 4. Provide the ability to identify landmarks on 2 images and calculate the TRE from the registration 5. Provide the ability to calculate the DSC and MDA between the contours defined on an image and the contours mapped to the image via image registration 6. Support the integration of a request and report system for image registration
TG-132 Product
Guidelines for understanding of clinical tools
Digital (virtual) phantoms
Recommendations for commissioning and clinical implementation
Recommendations for periodic and patient-specific QA/QC
Recommendations for clinical processes