Computational Medical Imaging Analysis Chapter 5: Processing and Analysis

Computational Medical Imaging Analysis Chapter 5: Processing and Analysis Jun Zhang Laboratory for Computational Medical Imaging & Data Analysis Department of Computer Science University of Kentucky Lexington, KY 40506 Chapter 5: CS689 1

5.1a: Challenges in Comprehending Information in Biomedical Images. Image enhancement and restoration; automated and accurate segmentation of structures and features of interest; automated and accurate registration and fusion of multimodality or multispectral information; classification of image content, namely tissue characterization and typing; and quantitative measurement of image properties and features, including a discussion of the meaning of image measurement.

5.2a: Image Enhancement and Restoration. Image enhancement methods attempt to improve the quality of an image. We can make certain features in the image more recognizable or prominent. Examples are amplification of edges or reduction of noise to increase the contrast between regions of an image. It is also possible to increase the visibility of features at a certain scale or with a certain spectral signature.

5.2b: Tradeoffs in Detail and Noise. There is an inherent tradeoff between amplifying detail and reducing noise when applying image enhancement techniques. Procedures that enhance the visibility of detail also increase the noise; conversely, procedures applied to reduce noise also reduce detail (denoising usually blurs edges). Many techniques have been developed to achieve image enhancement using both linear and nonlinear methods (such as PDE-based denoising techniques).

5.2c: Histogram Operations. The histogram of an image is a function that relates the number of pixels in an image to the range of brightness values of those pixels. It is normally a 2D graph, with the abscissa showing the brightness values and the ordinate showing the number of pixels. Any point on the graph indicates the number of pixels in the image that have the same brightness level. It usually has one or more peaks and valleys that correspond to the gray levels that are most common and least common throughout the image.
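The histogram described above is straightforward to compute. A minimal sketch in plain Python (the function name and the tiny 3x4 test image are illustrative, not from the slides):

```python
def histogram(img, levels=8):
    """Count how many pixels take each brightness value."""
    h = [0] * levels
    for row in img:
        for p in row:
            h[p] += 1
    return h

# A tiny 3x4 image with gray levels 0..7; level 3 is the dominant peak.
img = [[0, 1, 1, 2],
       [2, 2, 3, 3],
       [3, 3, 3, 7]]
print(histogram(img))  # [1, 2, 3, 5, 0, 0, 0, 1]
```

The peak at level 3 (five pixels) and the near-empty levels 4-6 are exactly the kind of under-utilized brightness range that histogram equalization addresses.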

5.2d: Histogram Illustration

5.2e: Use of Histogram. The global statistical manipulation of image gray scale values is based on histogram matching. Evaluation of histograms can reveal that some brightness (gray) levels may be underutilized as far as efficient display is concerned. Histogram equalization is a response to such an evaluation and refers to spreading out or stretching the gray levels so that they are all used as evenly as possible. This manipulation can take full advantage of the display system, but may alter the original data.

5.2f: Histogram of A Mosaic

5.2g: Histogram of A Scene

5.2h: Histogram Equalization. Histogram flattening or equalization uses an ideal flat histogram shape for histogram matching, which maximizes contrast in the image. A flattening step can be used to preserve contrast when moving from a high-resolution gray scale to a lower-resolution one; otherwise, wasted gray levels tend to cause loss of detail. A slight modification of histogram flattening that limits the maximum slope of the cumulative distribution function can be used to transmit maximum information content in the low-resolution image.
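The usual equalization recipe maps each gray level through the image's normalized cumulative histogram. A minimal sketch (function names are illustrative; this is the standard CDF-based mapping, not code from the slides):

```python
def equalize(img, levels=256):
    """Histogram equalization: remap gray levels through the normalized CDF."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the gray levels.
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each level so that the occupied range stretches over 0..levels-1.
    lut = [max(0, round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1)))
           if n > cdf_min else v for v in range(levels)]
    return [[lut[p] for p in row] for row in img]

# A low-contrast image using only levels 50 and 100 is stretched to 0 and 255.
print(equalize([[50, 50], [100, 100]]))  # [[0, 0], [255, 255]]
```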

5.2i: Illustration of Histogram Equalization. Original image; original histogram; equalized image; histogram of equalized image

5.3a: Spatial Filtering. Spatial filtering involves the replacement of the image value at each pixel location with some function of that pixel and its neighbors. A linear filter (convolution) uses weighted sums and is reversible. A simple filter replaces each pixel by the computed mean, or average, of itself and its eight closest neighbors. The size of the neighborhood (the kernel of the convolution) may be varied for different effects.

5.3b: Spatial Filtering (II). The most common goal of pixel averaging is to reduce noise in the image. An accompanying result is smoothed or blurred edges in the image. Blurring and computation time increase with the size of the neighborhood (kernel). To highlight differences between pixels in an image, each pixel may be replaced by the difference between itself and the mean of its neighborhood; this is unsharp masking for edge enhancement.
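Both operations above can be sketched in a few lines of plain Python. This toy version (illustrative names; border pixels simply average over the neighbors that exist) shows the mean filter and an unsharp mask built on top of it:

```python
def mean_filter(img, k=3):
    """Replace each pixel by the mean of its k x k neighborhood."""
    h, w, r = len(img), len(img[0]), k // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def unsharp_mask(img, amount=1.0):
    """Sharpen: original plus amount * (original - blurred)."""
    blurred = mean_filter(img)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

On a flat region both filters leave the image unchanged, while near an edge the unsharp mask overshoots on each side, which is what makes the edge look crisper.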

5.3c: Spatial Filtering (I). Original; kernel size 5x5

5.3d: Spatial Filtering (II). Kernel size 9x9; kernel size 15x15

5.3e: Unsharp Masking

5.3f: Unsharp Masking with Scales

5.3g: Unsharp Masking Illustration. Original image; blurred image (3x3 mean filter); edge-enhanced image

5.4a: Frequency Filtering. Many advanced image enhancement techniques are developed in the Fourier domain. Fourier's theorem states that any waveform (including the 2D and 3D spatial waveforms that are images) can be expressed as the sum of sinusoidal basis functions at varying frequencies, amplitudes, and relative phases. Reducing image noise, enhancing image contrast and edge definition, and other types of operations can be performed on the Fourier transform of an image.

5.4b: Advantages of Frequency Filtering. Operations in frequency space can often be faster than spatial convolution, especially if the convolution mask (region) is large (speed of filtering). Frequency filtering permits certain operations that are problematic in the spatial domain, such as enhancing or suppressing specific frequencies in the image. High frequencies in an image can be suppressed using a low-pass frequency filter; this suppresses noise in the image, but also image detail.
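The low-pass idea can be demonstrated on a 1D signal with a naive O(N^2) discrete Fourier transform (a teaching sketch, not the FFT a real implementation would use; function names are illustrative):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def lowpass(x, cutoff):
    """Zero every frequency bin above `cutoff` (as a fraction of Nyquist)."""
    X = dft(x)
    N = len(X)
    keep = cutoff * N / 2
    for k in range(N):
        f = min(k, N - k)  # symmetric (positive/negative) frequency index
        if f > keep:
            X[k] = 0
    return idft(X)
```

A constant signal (pure DC) passes through unchanged, while a maximally oscillating signal such as [1, -1, 1, -1, ...] sits entirely at the Nyquist frequency and is wiped out, which is exactly the noise-versus-detail tradeoff the slide describes.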

5.4c: Low-pass Filter (I). Original image; blurred image with Gaussian noise

5.4d: Low-Pass Filter (II). Low-pass filter with cut-off frequency 0.3; low-pass filter with cut-off frequency 0.5

5.4e: High-Pass Filtering. A high-pass filter enhances the high frequencies in the image, increasing both detail and noise. Frequency domain filtering may selectively enhance or suppress periodic patterns in the image by judicious selection of frequency filter functions, which is called band-pass filtering.

5.4f: High Frequency Filtering. High-frequency filter with cut-off at 0.5 applied to a clone image; image produced with Sobel operator

5.5a: Image Restoration. Linear system theory is based on the supposition of linear relationships among all components of an imaging system, a reasonable assumption over the normal range of modern medical imaging systems. Linear systems can be completely characterized by their response to impulse functions, which are finite amounts of energy occurring over zero time. The impulse response (point-spread function, or psf) can be used to predict the output of the system to any arbitrary input by the process of convolution, essentially replacing each of the points in an image with its appropriately scaled impulse response.

5.5b: Deconvolution. The highest frequencies, or sharpest details, of an image are generally degraded or lost. It is possible to use the point-spread function to mathematically deblur or sharpen an image (deconvolution). The process of deblurring an image is different from image enhancement: enhancement makes certain things sharper or more prominent, while deconvolution restores the image to more exactly represent its original object.

5.5c: Methods of Deconvolution. Knowing the psf is the key to successful deconvolution. The psf can be measured empirically, estimated theoretically, or estimated on the fly from the acquired data. The Wiener filter minimizes the mean squared error between the true object and the restoration of the object. Iterative nonlinear restoration techniques are usually better. Blind deconvolution is used when measurement of the psf is difficult or tedious.

5.5d: Deconvolution (Example). Turbulence on the surface of Jupiter: original and restored

5.6a: Image Segmentation. Segmentation is the spatial partitioning of an image into its constituent parts, or isolating specific objects in an image. Segmentation is often confused with and used interchangeably with classification. Image classification means identifying what an object in the image is, or what type of object each pixel belongs to. Segmentation can be manual, automatic, or semiautomatic (assisted manual).

5.6b: Manual Segmentation. Manual segmentation involves interactive delineation of the structure boundary in an image by a trained operator. This is often the most accurate approach if an expert is doing the work and is not fatigued or hampered by limiting interface devices. The drawbacks are that it is time consuming, error prone, subjectively biased, and not reproducible. Multiple operators and images from different scanners increase the variability of the defined borders.

5.6c: Manual Segmentation (Example)

5.6d: Draw Contour

5.7a: Thresholding (Semiautomatic Segmentation). A gray scale range is chosen that represents the object of interest. Voxels within the gray scale range are set to one; all other voxels are set to zero. The technique is successful if the specified gray scale range is unique to and encompasses the entire object of interest. The threshold range can be determined interactively with a side-by-side display of the gray scale and thresholded images.

5.7b: Color Image Thresholding

5.7c: Thresholding using Histogram. The histogram can be used to select threshold values. A value at the minimum between the two peaks of a bimodal histogram is often used as a threshold value. It is advisable to blur (smooth) the image before histogram thresholding. Visually selecting the threshold values usually works best, but is not reproducible.
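The valley-between-peaks rule can be sketched directly (illustrative function names; assumes the histogram really is bimodal, as the slide does):

```python
def valley_threshold(hist):
    """Pick the minimum count between the two dominant peaks of a histogram."""
    # Local maxima of the (assumed bimodal) histogram.
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1] and hist[i] > 0]
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    # The valley is the lowest bin between the two peaks.
    return min(range(p1, p2 + 1), key=lambda i: hist[i])

def threshold(img, t):
    """Binarize: pixels above the threshold become 1, the rest 0."""
    return [[1 if p > t else 0 for p in row] for row in img]

# A bimodal histogram with peaks at bins 2 and 6 and a valley at bin 4.
print(valley_threshold([0, 5, 8, 5, 1, 4, 9, 6, 0]))  # 4
```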

5.7d: Example of Histogram Thresholding

5.8a: Region Growing. An image or volume can be divided into regions based on associated regional characteristics, such as homogeneity of gray scale. The most basic form of region growing is based solely on thresholding. Connectivity criteria can be combined with thresholding to produce more powerful region growing performance. A pixel is connected if it satisfies the thresholding condition and is connected to the seed pixel at a specified number of sides and/or corners.
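Seeded region growing with a threshold condition plus 4-connectivity (sides only) can be written as a simple flood fill. A minimal sketch (illustrative names; an 8-connected variant would also push the diagonal neighbors):

```python
def region_grow(img, seed, lo, hi):
    """Collect pixels 4-connected to the seed with gray values in [lo, hi]."""
    h, w = len(img), len(img[0])
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if lo <= img[y][x] <= hi:
            region.add((y, x))
            # 4-connectivity: grow through sides only, not corners.
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region
```

Note that in the tiny image below the bottom-right pixel also satisfies the threshold but touches the region only at a corner, so 4-connected growing correctly excludes it.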

5.8b: Seeded Region Growing

5.8c: Region Growing in a Diffusion Weighted Image

5.9a: Mathematical Morphology. Mathematical morphology involves a convolution-like process using variously shaped kernels, called structuring elements. The structuring elements are mostly symmetric: squares, rectangles, and circles. The most popular morphological operations are erode, dilate, open, and close. The operations can be applied iteratively in selected order to effect a powerful process.

5.9b: Erode Functions. The erode function is a reducing operation: it removes noise and other small objects, breaks thin connections between objects, removes an outside layer from larger objects, and increases the size of holes within an object. For binary images, any pixel that is set (1) and has a neighbor that is not set (0) is set to 0. For gray scale images, erosion is equivalent to the minimum function. The neighbors considered are defined by the structuring element.
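For a 3x3 square structuring element the binary rule above reads: a set pixel survives only if all eight neighbors are also set. A minimal sketch (illustrative name; out-of-bounds neighbors are treated as background):

```python
def erode(img):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)]
            for y in range(h)]
```

Eroding a solid 5x5 block strips the outside layer, leaving only the 3x3 interior, which is the "removes an outside layer" behavior described above.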

5.9c: Illustration of Erosion Function. Erosion with a 3x3 square structuring element

5.9c: Example of Erode Function. Input image; eroded image

5.9d: Erosion Example. Input image; eroded image

5.10a: Dilation Function. The dilate function can be thought of as an enlarging function, the reverse of erode. For binary data, any 0 pixel that has a 1 neighbor (the neighborhood being defined by the structuring element) is set to 1. For gray scale data, dilation is a maximum function. Dilation fills small holes and cracks and adds layers to objects in a binary image.

5.10b: Example of Dilation. Input image; dilated image

5.10c: Dilated Image. Input image; dilated image

5.11a: Erosion Dilation Functions. Erode and dilate are essentially inverse operations, and they are often applied successively to an image volume. An erosion followed by a dilation is called an open. A morphological open will delete small objects and break thin connections without loss of surface layers. A dilation followed by an erosion is called a close. The close operation fills small holes and cracks in an object and tends to smooth the border of an object.
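Since erosion is a neighborhood minimum and dilation a neighborhood maximum, open and close fall out as two-line compositions. A minimal self-contained sketch (illustrative names; a 3x3 square structuring element, with pixels outside the image treated as background):

```python
def _apply(img, op):
    """Apply min (erode) or max (dilate) over each 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    pad = lambda y, x: img[y][x] if 0 <= y < h and 0 <= x < w else 0
    return [[op(pad(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def erode(img):
    return _apply(img, min)   # minimum over the structuring element

def dilate(img):
    return _apply(img, max)   # maximum over the structuring element

def opening(img):
    return dilate(erode(img))  # deletes small objects, breaks thin connections

def closing(img):
    return erode(dilate(img))  # fills small holes and cracks
</n>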

5.11b: Example of Open Operation. Input image; opened image

5.11c: Example of Open Operation. Input image; opened image

5.11d: Example of Close Operation. Input image; closed image

5.11e: Example of Close Operation. Input image; closed image

5.11f: An Automated Segmentation. (a) original image, (b) thresholding, (c) erosion, (d) dilation, (e) closing, (f) mask rendering, (g) volume rendering

5.12a: Active Contours (Snakes). Segmenting an object in an image with active contours involves minimizing a cost function based on certain properties of the desired object boundary and contrast in the image. Smoothness of the boundary curve and local gradients in the image are usually considered. Snake algorithms search the region about the current point and iteratively adjust the points of the boundary until an optimal, low-cost boundary is found. The algorithm may get caught in a local minimum, depending on the initial guess.

5.12b: Example of A Snake Algorithm. Initial contour in green; intermediate contour in yellow; final contour converged in 25 iterations

5.12c: Active Contour with Level-Set Method

5.13a: Image Registration. Registration in the biomedical image sciences means bringing into spatial alignment separately acquired images of the same object. When accurately registered, each separate image will have the same coordinate system, and a given voxel in one image will represent the same physical volume as the corresponding voxel in another image. Interpolation is usually involved in the resampling and/or reformatting process.

5.13b: Registration and Fusion. Registration is required for multispectral analysis or classification of image features from different images recorded over time and/or fused from different modalities. Image fusion means the actual combining of information from registered multiple images into a single image. Fused displays can be accomplished using various combinations of color, gray scales, transparency, etc.

5.13b*: Example of Registration. MRI + SPECT; MRI; SPECT

5.13b*: Co-Registration. Coregistered SPECT-MRI image. The SPECT image was pasted in opaque mode on top of the black-and-white MRI image, which provides an anatomical template

5.13c: Steps in Image Registration. The steps are: feature extraction; pairing of corresponding features; calculation of transformation parameters; and performing the transformation. Calculation of transformation parameters is the most challenging part of the process. It may involve optimization of a prescribed cost function achieved by iterating the solution. Both rigid body and elastic types of transformation are possible.

5.13d: Transformations. In a rigid body transformation, all the points and objects in an image are assumed to move as a whole and do not move relative to each other; translation and rotation are the only motions. An affine transformation maps straight lines to straight lines and preserves parallel lines, but angles between these lines can change. Projective transformations involve registering spaces of different dimensions, e.g., registering 3D images to 2D images.
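A 2D rigid body transformation is a rotation followed by a translation; it preserves all inter-point distances, which is what "move as a whole" means. A minimal sketch (illustrative function name):

```python
import math

def rigid_transform(points, theta, tx, ty):
    """Rotate each point by theta (radians) about the origin, then translate."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Rotating (1, 0) by 90 degrees and shifting by (2, 3) lands near (2, 4).
print(rigid_transform([(1.0, 0.0)], math.pi / 2, 2.0, 3.0))
```

An affine transformation would replace the 2x2 rotation matrix with an arbitrary invertible matrix, gaining scaling and shear but losing the distance-preserving guarantee.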

5.13e: Rigid Body Registration. Rigid body registration between two individuals

5.14a: Unimodal Registration. Unimodal registration is registration within the same imaging modality. The reconstruction of 3D structures from microscope images of serial sections requires precise realignment of each section to its neighbors. Volumetric unimodal registration is required to use medical images quantitatively to study disease progression, monitor patient response to treatment, and evaluate surgical performance for quality control. Patient motion artifacts may occur for serial acquisitions.

5.14b: Composite Images Un-registered. 20 MRI scans of the same slice of an MS patient. Note the location and orientation differences

5.14c: Composite Images Registered. The same images registered. Each of the scans is aligned, and the changes in lesions can easily be tracked over time

5.14d: Intra-Patient Registration

5.15a: Multimodal Registration. By combining information contained in multiple images of complementary modalities, synergism is achieved, as voxel-by-voxel tissue characteristics can be determined with greater subtlety and precision. The relation of structure to function can be revealed by combining structural and functional images. Color analysis is a commonly encountered example of multiple-band data in which the bands are inherently coregistered.

5.15b: Fusion of CT and MRI (Liver). Fused MRI and CT; top-right: MRI; lower-right: CT. In the fused image, the red component comes from the CT image and the green component from the MRI image

5.15c: Fused Image from CT and PET. Left: CT image; middle: PET image; right: fused image. CT is used as the background, and the PET image supplies the blue color

Registration Quality and Error Metric. Corresponding points: the most straightforward registration error metric is the mean Euclidean distance between corresponding points in both images. It exhibits a global minimum at the point of perfect registration and increases monotonically with increased rigid misregistration. The corresponding points may be determined by attaching extrinsic fiducial markers rigidly to the patient's bony structures, or by expert identification of intrinsic anatomic landmarks.
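The corresponding-points metric is a one-liner once the point pairs are known. A minimal sketch (illustrative name; assumes the two lists are already paired up):

```python
import math

def mean_point_distance(pts_a, pts_b):
    """Mean Euclidean distance between corresponding fiducial points.

    Zero at perfect registration; grows monotonically with rigid
    misregistration of the point sets.
    """
    return sum(math.dist(a, b) for a, b in zip(pts_a, pts_b)) / len(pts_a)

# Two landmarks each displaced by a (3, 4) shift: mean error is 5.
print(mean_point_distance([(0, 0), (1, 1)], [(3, 4), (4, 5)]))  # 5.0
```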

Landmarks for Registration. Four points used for registration: the center of the trachea cross-section on slice A and the centers of the cross-sections of the sternum, trachea, and vertebra in slice B

Corresponding Surfaces. Corresponding image surfaces are used for intra- and intermodality registration. They can be simple isosurfaces or carefully chosen hand-segmented contours. The selected surfaces must correspond, and rigid structures are preferable to soft tissues as surface features. Surface-based metrics involve a greater number of coordinate points, but lack point-to-point correspondence information.

Example of Surface Registration. Original right lung; initial surface registration; after 25 iterations. Surface is in red

Corresponding Image Features. For intramodality registration, the mean squared error between corresponding image voxels has the required minimum at the point of proper registration. We can minimize the normalized standard deviation of the gray values of voxels in one image that correspond to voxels of a single gray value in the other. The joint entropy among images can be minimized, or the mutual information among the images can be maximized.
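Mutual information is computed from the joint gray-level histogram of the two overlapping images. A minimal sketch over two tiny, already-resampled images (illustrative function name; real registration code would recompute this at every trial pose):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """MI of two same-sized images, from their joint gray-level histogram."""
    pairs = [(a, b) for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb)]
    n = len(pairs)
    p_ab = Counter(pairs)                  # joint histogram
    p_a = Counter(a for a, _ in pairs)     # marginal of image A
    p_b = Counter(b for _, b in pairs)     # marginal of image B
    return sum(c / n * math.log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())
```

MI peaks when one image predicts the other (an image against itself gives its own entropy) and drops to zero for statistically independent images, which is why maximizing it drives the images into alignment even across modalities.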

Corresponding Feature Registration. The PET image shows the tumor; the CT image shows its location

Search Strategy. Once a registration feature (surface, point, voxel) is determined, an optimization method is used to optimize the cost function. The dimensionality of the search space is high (6 DOF for 3D rigid registration, with higher DOF for increasing severity of elasticity). Effective optimum search strategies include gradient descent, Powell's method, and the simplex method. They involve iterative testing of trial orientations to detect local extrema of the evaluative function.

Optima Search Methods. Gradient descent methods require the calculation of the complex local gradient of the error function for search direction and step size. Powell's method conducts a bounded golden-section search in one DOF and iterates over the other DOF until convergence. The simplex method uses a set of (N+1) trial orientations for rapid estimation of the direction of the greatest gradient, without the computational burden of gradient computation.
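The essence of "iterative testing of trial orientations" can be shown in one dimension: score a set of trial shifts with a cost function and keep the best. This toy example (illustrative names; exhaustive search over integer shifts standing in for the iterative strategies above, with sum of squared differences as the cost) recovers a known translation between two 1D signals:

```python
def best_shift(fixed, moving, max_shift=3):
    """Exhaustive search for the shift minimizing mean SSD over the overlap."""
    def ssd(shift):
        pairs = [(fixed[i], moving[i - shift]) for i in range(len(fixed))
                 if 0 <= i - shift < len(moving)]
        return sum((f - m) ** 2 for f, m in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=ssd)

# `moving` is `fixed` shifted left by 2 samples; the search recovers shift 2.
fixed = [0, 0, 1, 2, 1, 0, 0]
moving = [1, 2, 1, 0, 0, 0, 0]
print(best_shift(fixed, moving))  # 2
```

A real 3D rigid search faces the same cost landscape in 6 DOF, which is why the gradient-free strategies above are used instead of brute force.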

Registration of Ultrasound and SPECT. Registered image of the heart from ultrasound and SPECT images

Confounding Factors. Gray scale inhomogeneity can cause the segmented shape of an object surface used for registration to not coincide exactly. The drift in voxel size over time may be an important confounding factor for longitudinal MRI studies. Accurate intermodality scale factors are often difficult to calculate. Scale space searches must be carefully bounded, and their results may be suspect, particularly if volumetric difference measures are made from the registered images.

Registration Error. Source; target; registered. Error before registration; error after registration

Evaluating and Validating Image Registration. Translating and rotating a copy of an image volume by known amounts and then registering that copy to the original allows the best-case accuracy of the algorithm to be assessed, as the correct solution is known. Additive noise can be introduced to simulate real-world conditions, and gray scale remapping can be applied to simulate intermodal applications. Parts of the copy can be erased to simulate partially overlapping scans.

Evaluating and Validating Image Registration (Clinical Setting). An expert can grade registration for quality; this relates the algorithm's performance directly to the currently used gold standard. Multiple experts can improve repeatability, but residual error below visual detection may exist. Phantom experiments (with suitable materials and geometric features) can be used to isolate sources of error. Patient images with attached markers provide the most complete validation of rigid registration algorithms applied to real data.

6.1a: Multispectral Classification. A number of separate measurements are collected for each sample in a multispectral data set. Each sample is vector-valued; examples are RGB images, or a CT image and an MR image of the same part of a patient's body. Individual channels (or bands) may allow different distinctions to be made about each pixel in the image.

6.1b: Multispectral Imaging Technique. Infrared image; visible image; multispectral image

6.1c: Multispectral MRI Classification. 48-hour-old stroke in a two-year-old. a) T2-weighted MRI; b) manually classified from T2-weighted MRI; c) automatically classified MR multispectral image

6.2a: Methods. The goal of multispectral classification is to accurately and quickly identify scene objects with a minimum of operator intervention by taking advantage of the additional information available in multispectral images. Entire images are analyzed in multispectral space and their pixels labeled as belonging to spectral and information classes in feature space. Pixels in the classified image are assigned class numbers and typically color coded to visualize the classification results in the form of a thematic map.

6.2b: Multispectral Brain Image. Normal MRI of a teenage female with sickle cell disease. a) T2-weighted MRI; b) manually classified from T2-weighted MRI; c) automatically classified MR multispectral image

6.2c: Image Space and Preprocessing. Bands should be selected to enhance the contrast and segmentability of the features of interest. Principal component analysis can be used to transform the multispectral data into a new spectral coordinate system that enhances feature contrasts: pixel data are remapped into a new spectral space such that the most variance is in band 1, the next most in band 2, and so on. Data from each band must be spatially registered. 3D data may require denoising to ensure consistency.

6.2d: Unsupervised Classification. Unsupervised classifiers require no prior knowledge of feature characteristics, but attempt to identify features in multispectral space. Pixels associated with the clusters are assigned to classes in feature space. Unsupervised classification allows the automatic identification of spectral clusters, and may reveal spectral classes to be used or segmented in supervised classifiers. The spectral classes can be used to generate multispectral signatures of classes in feature space.

6.2e: Supervised Classification. Supervised classifiers require training samples taken to be representative pixels of features to be identified in the feature space. The method depends on expert-defined training samples known to belong to specified feature space classes. Pixels are classified and assigned values in feature space on the basis of some criterion for how similar they are to previously identified pixels in the training samples. An example is minimum-distance classification against class means, as in the assignment step of the k-means algorithm.

6.2f: Multispectral Space Histogram values in a certain range may be taken as belonging to a class in feature space; a bimodal histogram may be suggestive of two classes in feature space. A scattergram can be used to visualize clusters in multispectral space for two spectral bands, but it is not well suited to visualizing more than two spectral bands.
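A scattergram of the kind described above is just a 2D bin count over two band values; a minimal pure-Python sketch (function name and bin size are illustrative choices, not from the slides):

```python
def scattergram(band1, band2, bin_size=16):
    """Count how many pixel pairs fall into each (band1, band2) bin.
    Dense bins correspond to clusters in the two-band multispectral
    space; a bimodal 1D histogram is the one-band special case."""
    counts = {}
    for x, y in zip(band1, band2):
        key = (int(x) // bin_size, int(y) // bin_size)
        counts[key] = counts.get(key, 0) + 1
    return counts
```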

6.2g: Multispectral Visualization

6.3f: Histogram Inference

6.3a: Unsupervised Algorithms Unsupervised classifiers have the potential to be more reproducible, since no expert is needed to identify pixels. Some parameters may still be required, e.g., the number of output classes, criteria for starting a new class, the number of iterations, the uncertainty of class boundaries, or other stopping criteria. Many data mining algorithms can be applied to both supervised and unsupervised classification and may be tailored for medical image classification; the question is how to prepare medical image data for these algorithms.

6.3b: The Chain Method The chain method is a basic squared-error type of classifier using the Euclidean distance in feature space as its metric. It can be used as an initial classifier to find class centroids for more advanced methods. The first pixel is placed in a class by itself and becomes the centroid of that class. Each subsequent pixel is assigned to an existing class if it is close enough; otherwise a new class is started. Class centroids are updated every time a new pixel is added to a class; a centroid is the mean value of the pixels in that class.
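The chain method as described above can be sketched in a few lines of pure Python (a hedged illustration, not a reference implementation; the function name and threshold parameter are hypothetical):

```python
import math

def chain_classify(pixels, threshold):
    """Chain-method clustering: assign each feature vector to the
    nearest existing class centroid if within `threshold` (Euclidean
    distance), otherwise start a new class. Centroids are updated as
    running means each time a pixel joins a class."""
    centroids, counts, labels = [], [], []
    for p in pixels:
        best, best_d = -1, float("inf")
        for i, c in enumerate(centroids):
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = i, d
        if best >= 0 and best_d <= threshold:
            counts[best] += 1
            k = counts[best]
            # Incremental mean: new_mean = old_mean + (x - old_mean) / k.
            centroids[best] = tuple(c + (x - c) / k
                                    for c, x in zip(centroids[best], p))
            labels.append(best)
        else:
            centroids.append(tuple(p))
            counts.append(1)
            labels.append(len(centroids) - 1)
    return labels, centroids
```

Note that the result depends on pixel order, which is one reason the chain method is used mainly to seed more advanced classifiers.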

6.3c: The Isodata Method The isodata method is initialized with a set of starting class centroids, and updates are performed as in the chain method. Isodata with merge adds the refinement of a change threshold, allowing classes to merge when they are close enough or split as they grow. Small clusters are either discarded or merged with the closest larger clusters.
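One way the merge refinement might look, as a sketch (not from the slides; assumes centroids and per-class pixel counts are already available, e.g. from a chain-method pass):

```python
import math

def merge_close_classes(centroids, counts, merge_threshold):
    """Isodata-style merge pass: repeatedly merge any pair of class
    centroids closer than `merge_threshold`, replacing them with their
    count-weighted mean so the merged centroid reflects both classes."""
    centroids, counts = list(centroids), list(counts)
    merged = True
    while merged:
        merged = False
        for i in range(len(centroids)):
            for j in range(i + 1, len(centroids)):
                if math.dist(centroids[i], centroids[j]) < merge_threshold:
                    n = counts[i] + counts[j]
                    centroids[i] = tuple(
                        (a * counts[i] + b * counts[j]) / n
                        for a, b in zip(centroids[i], centroids[j]))
                    counts[i] = n
                    del centroids[j], counts[j]
                    merged = True
                    break
            if merged:
                break
    return centroids, counts
```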

6.4a: Supervised Algorithms Supervised classifiers require expert guidance in the form of a set of known classes and their characteristics. Training samples may be a set of selected pixels from the input data being classified, or a predetermined set of standard spectral signatures for the classes to be identified. Supervised classifiers are more popular than unsupervised classifiers.

6.4b: Statistical Classifiers The maximum likelihood (Gaussian) classifier calculates the class centers in feature space from the training samples. The directions of the principal components of the feature values, and the standard deviations along each spectral component, are computed. A pixel is classified by computing its probability of belonging to each class and is assigned to the most likely class. Classification accuracy depends on a good estimate of the mean vector and covariance matrix of each class in feature space.
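A simplified sketch of this idea, assuming a diagonal covariance matrix (per-band variances only, which sidesteps the full covariance estimate the slide mentions; function names and data are hypothetical):

```python
import math

def train_gaussian(samples_by_class):
    """Estimate per-class means and per-band variances from training
    samples: a diagonal-covariance simplification of the maximum
    likelihood classifier."""
    model = {}
    for label, samples in samples_by_class.items():
        n = len(samples)
        nbands = len(samples[0])
        means = [sum(s[b] for s in samples) / n for b in range(nbands)]
        # Floor the variance to avoid division by zero on degenerate bands.
        vars_ = [max(sum((s[b] - means[b]) ** 2 for s in samples) / n, 1e-6)
                 for b in range(nbands)]
        model[label] = (means, vars_)
    return model

def classify_gaussian(model, pixel):
    """Assign the pixel to the class with the highest log-likelihood."""
    def loglike(means, vars_):
        return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
                   for x, m, v in zip(pixel, means, vars_))
    return max(model, key=lambda lbl: loglike(*model[lbl]))
```

With a full covariance matrix the log-likelihood would additionally account for correlations between bands, which is what makes the method sensitive to elongated, tilted class distributions.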

6.4c: Neural Network A neural network classifier builds a standard feedforward neural network and trains it on the class samples. The number of input nodes equals the number of bands, the number of output nodes equals the number of classes defined in the training samples, and the number of hidden nodes is set heuristically. The network classifies all of feature space by drawing nonlinear boundaries between the classes, and it trains and runs quickly when the number of classes is limited.

6.4d: Parzen Windows Classifier Also known as a probabilistic neural network, the Parzen windows classifier builds estimates of the underlying probability distribution of each class from the individual training samples. It draws near-optimal boundaries that approach the Bayes-optimal decision boundary. It is more computation intensive, but it is often the best choice for complex distributions of sample points.
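A minimal sketch of the Parzen/PNN idea (not from the slides; a Gaussian kernel is placed on every training sample, and the kernel width `sigma` is an assumed tuning parameter):

```python
import math

def parzen_classify(samples_by_class, pixel, sigma=1.0):
    """Parzen-window classification: estimate each class's density at
    `pixel` by summing a Gaussian kernel centred on every training
    sample of that class, then pick the class with the highest
    count-normalised density."""
    def density(samples):
        total = 0.0
        for s in samples:
            sq = sum((x - y) ** 2 for x, y in zip(pixel, s))
            total += math.exp(-sq / (2 * sigma * sigma))
        return total / len(samples)
    return max(samples_by_class,
               key=lambda lbl: density(samples_by_class[lbl]))
```

The per-sample kernel evaluations are what make the method computation intensive compared with a classifier that only stores class means.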

6.4e: Euclidean Distance Classifiers When training data is limited, it is difficult to obtain a good estimate of the mean vector, and distance-based classifiers may do better. Minimum distance and nearest neighbor classifiers assign an unknown pixel to the class of the training sample that is closest in feature space. Compare the k-means and k-nearest-neighbor algorithms.
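A k-nearest-neighbor sketch of the distance-based rule above (illustrative only; with k=1 this is the plain nearest-neighbor rule, while a minimum-distance classifier would compare against per-class mean vectors instead of individual samples):

```python
import math

def nearest_neighbor_classify(training, pixel, k=1):
    """k-NN classification: label the pixel by majority vote among the
    k training samples closest in Euclidean feature-space distance.
    `training` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(training, key=lambda t: math.dist(t[0], pixel))[:k]
    votes = {}
    for _, label in neighbors:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```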

6.4f: Drawbacks of Distance Classifiers Covariance information is not used in minimum distance algorithms; classes are assumed to spread evenly in the spectral domain. Elongated and asymmetric classes, which have more variation in a particular band, are not well modeled by minimum distance methods. Input bands should therefore be scaled to the same dynamic range.
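The band-scaling step mentioned above is a simple min-max rescale; a sketch (function name and target range are illustrative):

```python
def rescale_band(band, new_min=0.0, new_max=1.0):
    """Min-max rescale one band so that all bands share the same
    dynamic range before a Euclidean-distance classifier is applied;
    otherwise the band with the widest range dominates the distance."""
    lo, hi = min(band), max(band)
    if hi == lo:
        return [new_min for _ in band]  # constant band: map to the floor
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (x - lo) * scale for x in band]
```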

6.5a: Spatial and Feature Context Spatial context and contextual clues can be used to produce a more accurate classification. A pixel is more likely to neighbor pixels of its own class than to be isolated; a single pixel of healthy tissue is unlikely to be surrounded by malignant tissue. Contextual information can be incorporated into the classification algorithm itself, or applied as a filter after classification.
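One common post-classification contextual filter is a 3x3 majority vote over the label image; a sketch (borders are left unchanged here for simplicity):

```python
def majority_filter(labels):
    """Relabel each interior pixel with the most common class in its
    3x3 neighbourhood, removing isolated single-pixel classifications
    of the kind the contextual argument above rules out."""
    rows, cols = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            votes = {}
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    lbl = labels[r + dr][c + dc]
                    votes[lbl] = votes.get(lbl, 0) + 1
            out[r][c] = max(votes, key=votes.get)
    return out
```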

6.5b: Iterative Refinements Statistically based classifiers calculate, for each pixel, an estimate of the likelihood of each class. This information can be used during iterative relaxation, in which the initial or previous classification of pixels with low classification confidence is reconsidered. Relaxation can be iterated until no more pixels are reclassified, or until the number of changed pixels falls below a specified threshold.

6.6a: Verification of Feature Maps Thematic map verification involves checking that pixels are correctly labeled. Empirical verification compares sample pixels from the thematic map to reference data. An error matrix can be calculated as a simple ratio, for each class, of the number of pixels correctly and incorrectly classified. Multispectral volumetric analysis of brain MRI can show significant deviation from normal values. Each clinical problem addressed with multispectral analysis will require its own protocol to select parameters that enhance segmentation of the structures of interest.
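The error matrix described above is a confusion matrix over paired reference and thematic-map labels; a small sketch (function name is illustrative):

```python
def error_matrix(reference, classified):
    """Build the error (confusion) matrix from paired reference and
    thematic-map labels, plus per-class accuracy: the ratio of
    correctly classified pixels to reference pixels of that class."""
    matrix, totals, correct = {}, {}, {}
    for ref, cls in zip(reference, classified):
        matrix[(ref, cls)] = matrix.get((ref, cls), 0) + 1
        totals[ref] = totals.get(ref, 0) + 1
        if ref == cls:
            correct[ref] = correct.get(ref, 0) + 1
    accuracy = {ref: correct.get(ref, 0) / n for ref, n in totals.items()}
    return matrix, accuracy
```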

6.7a: Feature Space Applied Classification results are presented in two main ways: a thematic map, in which pixels are color coded to identify components of the scene, and a numeric report based on pixel counts for each class. Multispectral classification can be a valuable tool for object segmentation, particularly for objects consisting of many small disconnected components.

6.7b: Classification Applications Segmented objects can be used for object-oriented volume rendering, and object volume is easily calculated from pixel counts. This can be used to detect and measure tumor size and brain volume. Time series studies have measured the response of tumor volume to therapy, as well as changes due to aging.
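The volume-from-pixel-counts calculation is just the class's voxel count times the physical size of one voxel; a one-function sketch (names and units are illustrative):

```python
def object_volume(label_counts, voxel_dims_mm, label):
    """Object volume from a classified 3D image: the number of voxels
    carrying the object's class label times the volume of one voxel.
    `voxel_dims_mm` is the (dx, dy, dz) voxel spacing in millimetres,
    so the result is in cubic millimetres."""
    dx, dy, dz = voxel_dims_mm
    return label_counts[label] * dx * dy * dz
```

In a time series study, computing this per scan gives the tumor-volume trajectory over the course of therapy.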

6.7c: Clinical Applications Clinical applications include detection of malignancies in biopsy specimens, measurement of tumor response to radiotherapy, and detection of multiple sclerosis. Computer-based automatic procedures must achieve a high level of confidence in their accuracy before they can be applied in clinical situations; they have to be tested in many clinical studies, with large population groups, before they can be approved for clinical use.