
The Pennsylvania State University
The Graduate School

AFFINITY PROPAGATION FOR COMPUTER-AIDED DETECTION OF LUNG CANCER IN 3D PET/CT STUDIES

A Thesis in Computer Science and Engineering
by Trevor K. Kuhlengel

© 2017 Trevor K. Kuhlengel

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science

December 2017

The thesis of Trevor K. Kuhlengel was reviewed and approved by the following:

William E. Higgins, Distinguished Professor of Electrical Engineering and Computer Science, Thesis Advisor
Robert T. Collins, Associate Professor of Computer Science and Engineering
Mahmut T. Kandemir, Professor of Computer Science and Engineering, Graduate Program Chair

Signatures are on file in the Graduate School.

Abstract

Bronchoscopic biopsy of suspicious sites is a critical component of lung-cancer staging. Physicians use PET/CT studies to provide structural and functional information important to diagnosis, staging, and bronchoscopy procedure planning. In this thesis, we aim to develop an automatic, robust, and time-efficient method for the detection of PET-avid thoracic regions of interest (ROIs) in 3D PET/CT studies. The method presented in this thesis uses a PET histogram of the thoracic cavity to automatically select thresholds to apply to the PET study. We detect PET ROIs using thresholds from affinity propagation clustering on the PET histogram after smoothing with kernel density estimation and exponential smoothing. Detected ROIs are subsequently filtered using false positive reduction techniques based on physical constraints of the PET image volumes and empirically chosen volume and intensity restrictions. The method achieves a 68.3% true detection rate, a 40.1% positive predictive value, and an average of 7.6 false positives per case, evaluated over a dataset of 17 cases. The average computation time for the method is 7.1 seconds.

Table of Contents

List of Figures
List of Tables

Chapter 1. Introduction
  1.1 Background
  1.2 PET Imaging
  1.3 Existing MIPL PET/CT Tools
  1.4 Problem Statement
  1.5 Literature Review

Chapter 2. Methods
  2.1 Method Overview
  2.2 Image Preprocessing
  2.3 Histogram Smoothing
    2.3.1 Kernel Density Estimation via Diffusion
    2.3.2 Exponential Histogram Smoothing
  2.4 Threshold Selection
    2.4.1 Affinity Propagation
    2.4.2 Thresholding and Detection
  2.5 False Positive Reduction
    2.5.1 SUV Intensity Restriction
    2.5.2 Size Restriction
  2.6 Existing PET ROI-Detection Approaches
  2.7 Augmentation of the MVD

Chapter 3. Results
  3.1 PET/CT Database
  3.2 Parameter Selection
    3.2.1 Selection Overview
    3.2.2 Histogram and Smoothing Parameters
    3.2.3 Affinity Propagation Parameters
  3.3 Experimental Results
    3.3.1 Affinity Propagation Results
    3.3.2 Comparison to Existing PET ROI-detection Approaches
    3.3.3 Summary of Results
    3.3.4 Computational Cost
  3.4 Discussion

Chapter 4. Conclusion

Bibliography

Appendix A. Ground Truth Data
  A.1 Regions of Interest

Appendix B. MVD Affinity Propagation PET Segmentation Cookbook
  B.1 Preparation

List of Figures

1.1 Example PET/CT Study
1.2 MVD Tool window display
1.3 Example PET Histogram
2.1 Process Diagram
2.2 Example PET and CT volumes
2.3 Histogram Smoothing
ROI Window Changes
CAD and AP interfaces
MVD AP Tool
Parameter Testing Results from Case
Effect of λ on convergence
Effect of t_min on Threshold Count
Effect of t_min on Sensitivity
Multiple thresholds as a contour map
Multiple thresholds of an intensely FDG-avid ROI
Maximum intensity projection views of case
MIP Results from
MIP Results from
MIP Results from
B.1 Cookbook: Load CT & PET Images
B.2 Cookbook: Load ROIs and Mask
B.3 Cookbook: Set the Thoracic Cavity Mask
B.4 Cookbook: Run the AP algorithm
B.5 Cookbook: Inspect the results

List of Tables

3.1 PET/CT Scan database
Final Parameter Selection
M's effect on computation time
Effect of M on Sensitivity and PPV
Results for AP without SUV restriction
AP Approach 2, SUV >
AP Approach 2 threshold data
SUV 2.5 Results
T 50% Results
Summary of Results
Summary of Comparison Results
Mean Execution Times
A.1 Ground-Truth PET/CT ROI database

Chapter 1
Introduction

1.1 Background

X-ray computed tomography (CT) and positron emission tomography (PET) are the standard imaging modalities for noninvasive lung cancer screening and staging [1-3]. Integrated PET and CT scanners provide co-registered three-dimensional (3D) multimodal data sets that cover the chest. CT images provide high resolution and contain highly detailed anatomical information. PET images complement the CT with highly specific information for cancer. The use of PET and CT together has become an indispensable tool for comprehensive noninvasive nodule assessment and lymph-node staging [3, 4]. Figure 1.1 illustrates co-registered PET and CT images and highlights a region of interest (ROI) that is readily visible on both PET and CT. We are interested in identifying PET ROIs from a PET/CT study for use in lung-cancer assessment and bronchoscopic planning. PET/CT image assessment and bronchoscopic procedure planning generally involves two steps: ROI identification and definition, followed by route planning [5]. The standard approach for lung cancer assessment requires the physician to manually interact with the PET/CT data

(a) PET Views (b) CT Views

Figure 1.1: An example PET/CT study with an example ROI shown in both imaging modalities. Images show (a) axial (left) and coronal maximum intensity projection (right) views of the PET image with a PET-avid ROI highlighted, and (b) the corresponding views of the co-registered CT scan.

to identify ROIs and develop bronchoscopy procedure plans. When analysis of the PET/CT indicates a high likelihood of cancer, physicians collect a physical sample of suspect tumors for analysis to confirm the diagnosis and exclude other causes [Detterbeck 2009]. Locating ROIs inside the complex structure of the lungs using a bronchoscope is skill-intensive, and physicians vary greatly in their accuracy [6]. The Penn State Multi-dimensional Image Processing Lab (MIPL) developed the Virtual Navigator System (VNS) to aid in navigation, route planning, and sampling of ROIs in the central chest [7, 8]. The VNS draws upon

PET/CT to generate virtual models of the chest and airway trees. It also provides tools to facilitate planning optimal paths for image-guided bronchoscopy to ROIs. During a procedure, the VNS assists in navigation and allows comparison of real-time images from bronchoscopes to the virtual model. In addition, locations can be confirmed for endobronchial ultrasound using PET/CT-based guidance [9]. The goal of this thesis is to create new PET-focused methods for detecting ROIs depicted in a PET/CT study. We can then use these ROIs for bronchoscopic procedure planning in the VNS. The remainder of this introduction reviews PET imaging, discusses existing MIPL tools for bronchoscopic procedure planning, defines our specific problem, and reviews previously proposed methods for PET-based ROI detection.

1.2 PET Imaging

PET is a functional imaging modality that is specific for lung cancer identification and management [2]. Figure 1.1a shows an example of a compact, PET-avid region denoting potential cancer. Such features are common in our work on lung cancer assessment. CT imaging provides high resolution, conveys accurate anatomical information, and shows abnormal ROIs in their anatomical context, but it is not specific for lung-cancer detection [10]. In contrast, while PET depicts only low-resolution anatomical structure, which outlines the major organs of the body, it can show high-intensity PET-avid regions that are highly suggestive of cancer. Thus, intense focal PET regions are strong indicators of abnormalities. While PET's specificity is ideal for identification of potential cancers, it has important limitations. First, PET has low resolution relative to CT: typical PET voxel x-y-z dimensions are 4 mm × 4 mm × 3 mm, or 48 mm³ per voxel [11]. Comparable CT technology produces images with voxel dimensions of 0.75 mm × 0.75 mm × 0.5 mm, or 0.28 mm³ per voxel. PET voxel intensities are floating-point representations of the standardized uptake value (SUV)

and typically range from 0 to 20, though the maximum for each scan varies. As a result, PET's anatomical information alone is not adequate for planning bronchoscopy. In this context, it is clear why PET's specificity and CT's anatomy are highly effective for lung cancer identification and screening when combined. Second, due to their low spatial resolution, PET images are highly susceptible to the partial volume effect (PVE) [Nestle 2006]. PVE encompasses two distinct phenomena that cause intensity values in images to differ from ideal measurement values [12]. The first phenomenon is the 3D image blurring resulting from the finite spatial resolution of the imaging system. The spatial resolution of PET is limited by detector design and the reconstruction process, and each detector system has a different point spread function. For example, the image of a small high-intensity source will appear as a larger, but dimmer, source. This diffusion can be modeled as the convolution of the actual source with the imaging system's point spread function. The second phenomenon results from image sampling on a voxel grid. Sampling a heterogeneous 3D process on a voxel grid causes many voxels to encompass different types of tissue. A given voxel's intensity is the mean of the intensities from the different tissues comprising the voxel. For intense regions of small size within a cool region, PVE spreads out the signal and lowers the intensity of the true ROI, while slightly increasing the intensity of nearby voxels. As a result, small ROIs are less likely to be identified. Using thresholding methods for PET ROI detection and subsequent segmentation is a natural option due to PET's innate specificity. With thresholding methods, the challenge for each new image is to define what SUV constitutes a high-intensity threshold that can be used for detection. We want a practical semi-automatic method to identify potential ROIs quickly without requiring highly interactive tools.
In Section 1.3, we discuss our lab's currently implemented tools to identify and segment PET images.

1.3 Existing MIPL PET/CT Tools

Figure 1.2: Screen capture of the MVD tool with a case loaded. The left side shows the image slicers with CT, PET, and fused views for examination of images. The right side is the main window, a ground-truth ROI list, and the maximum intensity projection of the thoracic cavity.

The MIPL has developed a suite of tools useful for PET/CT-based bronchoscopy procedure planning and guidance, as summarized below.

1. Multimodal Visualization Display (MVD): a program for working with PET and CT images [13].

2. Multimodal Virtual Navigation System (VNS): a software suite for bronchoscopic procedure planning and guidance [7].

This section reviews these tools. It also points out limitations of these tools for PET/CT-based ROI detection, which motivate this thesis. The MVD system, developed for interaction with PET and CT images, contains tools for analysis and ROI detection in PET/CT studies [13]. The MVD also includes visualization tools, such as 3D slicer views of CT, PET, and fused PET/CT images. Figure 1.2 shows a sample

system view, which illustrates the ease of viewing and navigating images. Segmented ROIs are also displayed on the slicer views, as seen in the green thoracic mask in the top row of slicer views in the left half of Figure 1.2. Current methods in the MVD and the Multimodal VNS to detect and segment PET ROIs include the interactive live wire, region growing, and thresholding techniques [Barrett 1997, 5, 13-15]. Live wire and region growing are both interactive methods that can obtain accurate, reproducible ROI segmentations from the PET image. However, they require target identification and interaction by the user. Live wire can semi-automatically define a 2D boundary or contour with a few mouse clicks from the user; a 3D version, entailing extra interaction, is also available [13]. Region growing iteratively incorporates pixels into a region, starting from a seed point, if they meet a similarity threshold. Both live wire and region growing depend on the user identifying and interactively defining ROIs across multiple slices; they are segmentation tools rather than detection methods. Simple thresholding requires the user to select an SUV threshold for a PET image. The threshold is applied, and connected-component labeling subsequently identifies contiguous PET regions above the threshold. However, requiring a user to manually enter an SUV threshold adds variability. Outputs frequently include many false alarms while missing other important ROIs. For example, one study involving manual threshold selection had a false positive rate of 3.22 false positives per true positive in 93 detected ROIs (equivalently, a 76% false positive rate) [5]. A high false positive rate requires more user time to review candidates and remove false positive ROIs. Otsu described an automated threshold selection method that uses information from the image histogram to select a threshold that optimally divides the image into two intensity groups [5, 16].
The method takes some of the onus of threshold identification off of the user by identifying a value that separates a given bimodal distribution into an upper and lower half [16]. However, PET intensity distributions are often barely bimodal, as shown in a sample PET

histogram (Figure 1.3), where there is a spike at zero and a normal distribution across the rest of the histogram. As a result of this structure, successive thresholds generated after the first are seemingly random incremental increases of the threshold value.

Figure 1.3: Sample PET histogram of SUV values for a whole-body PET image.

Each of the methods discussed above currently entails excessive user interaction and can produce unpredictable results that necessitate careful checking and substantiation of the outputs. The motivating challenge for this thesis is the desire to develop an automated method for PET ROI detection for use in bronchoscopy planning. Furthermore, we augment the MIPL's existing tools by implementing this new method in the MVD.

1.4 Problem Statement

The goal of this thesis is to identify diagnostically relevant regions in the PET image with less interaction than required by currently available methods. To achieve this, we develop an automatic detection method for PET-avid regions in a 3D PET/CT imaging study. We also expand the MVD system to incorporate this new detection method. The method should be time-efficient, highly reproducible, and able to detect ROIs of different intensities.

We aim to achieve a high detection rate and a correspondingly low false positive rate for the candidates detected using our method. This work focuses on PET images in the context of lung cancer detection and bronchoscopy procedure planning. We have two specific aims. Aim 1 is to develop a new semi-automatic detection method for PET/CT case studies. Aim 2 is to integrate this new method into our existing MVD. Finally, we validate the method and compare it to existing approaches on an existing 20-case PET ROI database assembled by the MIPL. We use the ground truth data as a reference for determining the true detection rate and positive predictive value of our method.

1.5 Literature Review

Many methods have been devised for the detection and segmentation of PET ROIs, and the problem is well described in the literature [Nestle 2006, Day 2009, Teramoto 2015, 17-20]. Among the available detection methods for PET, thresholding is the standard clinical choice for ROI detection. One simple method, identified by Nestle et al., uses thresholding to identify potentially interesting PET regions, with an SUV threshold of 2.5 to differentiate between malignant and benign regions, as given by the thresholded image:

$$I_T(x, y, z) = \begin{cases} I_{PET}(x, y, z), & \text{if } I_{PET}(x, y, z) > 2.5 \\ 0, & \text{otherwise} \end{cases} \qquad (1.1)$$

where I_T(x, y, z) denotes the thresholded version of the input PET scan I_PET. Fixed-threshold approaches identify intense regions of interest but do not compensate for PVE, which causes an underestimation of SUV values in ROIs with diameters less than 2-3 times the true spatial resolution of the image. Another method is to use a fraction of the given PET scan's maximum image intensity SUV_max as the threshold value, which can be normalized between subjects and
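As a concrete sketch, the fixed-SUV rule of Eq. (1.1), followed by the connected-component labeling step used for detection, can be written in a few lines of NumPy/SciPy. The array and function names here are illustrative, not part of the MVD implementation.

```python
import numpy as np
from scipy import ndimage

def fixed_suv_threshold(pet_suv, threshold=2.5):
    """Apply the fixed-SUV threshold of Eq. (1.1) and label candidate ROIs.

    pet_suv: 3D array of SUV values (hypothetical input volume).
    Returns the thresholded volume, an ROI label map, and the ROI count.
    """
    # Eq. (1.1): keep intensities above the threshold, zero elsewhere.
    i_t = np.where(pet_suv > threshold, pet_suv, 0.0)

    # Connected-component labeling groups contiguous supra-threshold
    # voxels into candidate ROIs (26-connectivity in 3D).
    structure = np.ones((3, 3, 3), dtype=int)
    labels, n_rois = ndimage.label(i_t > 0, structure=structure)
    return i_t, labels, n_rois

# Tiny synthetic example: two adjacent hot voxels in a cold background.
pet = np.zeros((8, 8, 8))
pet[4, 4, 4] = 6.0
pet[4, 4, 5] = 3.1
_, labels, n = fixed_suv_threshold(pet)
print(n)  # the two adjacent hot voxels form a single ROI -> 1
```

Because the threshold is fixed, the output depends entirely on the absolute SUV calibration of the scan, which is exactly the sensitivity to PVE and inter-scan variation noted above.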

scans. A sampling of maximum-based values suggested in the literature includes SUV_50%, SUV_42%, and SUV_40%, with many others proposed [Day 2009, 17-20]. The previously described Otsu algorithm is a related thresholding approach [16]. The algorithm first takes a threshold guess as an input. A new threshold is calculated using the guess, or previous threshold, to divide the image data into foreground and background and to determine the mean foreground and background intensities separately. Finally, the mean of those two intensity values is taken as the new threshold. This process can be repeated until a convergence criterion is met. More recently, Drever et al. proposed an iterative method for accurate segmentation of PET volumes [21]. Their work focuses on an iterative threshold segmentation procedure that optimizes the threshold on an image of a phantom, where the cross-sectional area and volume are known. While functional on phantoms, the method was not tested on real PET data with unknown ROI sizes, so its usefulness as a viable clinical method is unknown. In the class of threshold-selection algorithms, a newer method described by Foster et al. introduces an unsupervised learning algorithm into the threshold selection process [22]. Using a heavily smoothed version of a PET histogram, it applies the Affinity Propagation (AP) algorithm to distances along the histogram using a unique distance metric [23]. AP is an iterative clustering algorithm similar to K-means clustering [Hartigan 1979]. However, unlike K-means, AP does not depend on an initial guess of the number of clusters. Instead, it depends only on the inputs and a distance measurement between some portion of the data points. AP is deterministic: it always returns the same cluster outputs for the same distance inputs, and all intermediate steps remain the same.
Additionally, AP does not depend on knowledge of distance between all data points, so it can be used for graphs or applications with limited knowledge of pairwise distances. The algorithm outputs multiple thresholds that can explore more diffuse disease processes or isolate higher SUV ROIs present in a PET study. These properties make AP a potentially useful clustering algorithm for our purposes.
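These properties are easy to demonstrate with scikit-learn's AffinityPropagation (assumed available here; note that it uses a negative squared Euclidean similarity by default, not the histogram distance metric used by Foster et al.):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy 1D "histogram bin" features: two well-separated intensity groups.
x = np.array([[0.1], [0.15], [0.2], [5.0], [5.1], [5.2]])

# No cluster count is specified; AP infers it from the data and the
# preference values (sklearn's default: the median similarity).
ap = AffinityPropagation(damping=0.9, random_state=0).fit(x)

print(len(ap.cluster_centers_indices_))  # AP discovers 2 clusters
print(ap.labels_)
```

With a fixed `random_state` the result is reproducible, illustrating the determinism noted above, and the number of clusters emerges from the similarities and preferences rather than being supplied up front.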

We use Foster et al.'s AP-based method as the inspiration for our work in this thesis. Our proposed method aims to detect diagnostically relevant PET ROIs with a high detection rate and a low false positive rate. In addition, our proposed modified AP method incorporates CT data as a framework for focusing the PET analysis on the thoracic cavity. The remainder of this thesis is organized as follows. Chapter 2 describes our method for semi-automatic detection of PET-avid ROIs. It also illustrates the augmented MVD system tools. Chapter 3 presents experimental results using the MIPL PET/CT database, which demonstrate the usage and efficacy of our proposed method. Chapter 4 offers concluding comments and suggestions for future work. Finally, Appendix A contains extensive information on the PET ROI database, while Appendix B contains a visual cookbook-style example of how to use the augmented MVD tools.

Chapter 2
Methods

This chapter provides an overview of the proposed semi-automatic Affinity Propagation (AP) based PET ROI detection method, its processing steps, other thresholding approaches, and the integration of the described approach into existing tools. First, we provide an overview of the method and its inputs in Section 2.1. Next, we describe each processing step in Sections 2.2-2.5. Then, simple thresholding approaches are introduced in Section 2.6, which we use to compare against the results of our AP-based method. Finally, Section 2.7 describes improvements to the MVD for joint PET/CT ROI definition.

2.1 Method Overview

A patient's co-registered 3D CT scan I_CT and PET image I_PET serve as the inputs to the AP-based PET ROI detection method. I_CT and I_PET are whole-body images captured together while the subject breathes freely. I_CT is a three-dimensional (3D) X-ray computed tomography image consisting of N_z sections of N_x × N_y voxels, as shown in Figure 2.2a. Each voxel in I_CT stores the intensity in Hounsfield units (HU) for the corresponding real-world anatomical location. Similarly, the PET image I_PET consists of M_z slices of M_x × M_y voxels, where each voxel

Figure 2.1: Block diagram of the proposed PET/CT ROI detection and segmentation method.

in I_PET represents the standardized uptake value (SUV) of 18F-fluorodeoxyglucose (FDG), as shown in Figure 2.2b [20]. I_CT and I_PET generally have different voxel dimensions, spacings, and origins, but DICOM image-header information readily enables matching the coordinate systems of the two scans [24]. Since the images share a common world space, we can detect and segment regions in one image and map them into the 3D space of the other. Figure 2.1 shows our proposed high-level image processing pipeline for generating candidate ROIs R_j, j = 1, ..., N, from the inputs I_PET and I_CT. First, a 3D thoracic cavity mask R_Thorax is identified from I_CT using a previously devised method for automated thoracic cavity extraction (Section 2.2) [15]. Next, a histogram H of the PET region delineated by R_Thorax is generated. H is then smoothed using two different methods: kernel density estimation (KDE) via a diffusion-based approach and exponential smoothing (Section 2.3) [25]. From the resulting smoothed histograms, distances measured along the histogram curves are used as inputs to the iterative affinity propagation clustering algorithm to identify groups of bins [22, 23]. Each histogram bin is assigned to a cluster group, and these clusters are used to identify thresholds. The thresholds are then applied to the PET image to detect and segment candidate ROIs (Section 2.4). The ROIs are then filtered to reduce the number of false positives (Section 2.5). The final set of detected PET-avid ROIs is output for use in bronchoscopy procedure planning.
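The pipeline of Figure 2.1 can be outlined in code as follows. This is a simplified stand-in, not the MVD implementation: to keep the sketch short, the AP threshold-selection step is replaced by a high-tail quantile rule, and all names and data are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, label

def detect_pet_rois(pet, thorax_mask, n_bins=256, tail_fraction=0.005):
    """Sketch of the Figure 2.1 pipeline with simplified stand-ins."""
    # 1. Restrict the PET volume to the thoracic cavity (cf. Eq. 2.1).
    suv = pet[thorax_mask]
    # 2. Normalized histogram of the masked SUVs.
    h, edges = np.histogram(suv, bins=n_bins, range=(0.0, suv.max()))
    h = h / h.sum()
    # 3. Smooth the histogram (a uniform window with reflective
    #    boundaries stands in for KDE-via-diffusion plus windowed
    #    smoothing). h_smooth is where AP clustering would operate.
    h_smooth = uniform_filter1d(h, size=20, mode='reflect')
    # 4. Threshold selection: stand-in keeps the top `tail_fraction`
    #    of masked voxels instead of deriving thresholds from AP.
    thr = float(np.quantile(suv, 1.0 - tail_fraction))
    # 5. Detect: threshold inside the mask, then label candidate ROIs.
    candidates, n_rois = label((pet > thr) & thorax_mask)
    return thr, candidates, n_rois

# Synthetic demo: low-SUV gamma background plus one hot cube.
rng = np.random.default_rng(0)
pet = rng.gamma(2.0, 0.5, (24, 24, 24))
pet[10:13, 10:13, 10:13] = 9.0
mask = np.ones(pet.shape, dtype=bool)
thr, cands, n = detect_pet_rois(pet, mask)
```

The real method differs in step 4: the AP clusters on the smoothed histogram supply one or more data-driven thresholds, rather than a single fixed quantile.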

(a) CT Scan (b) PET Scan

Figure 2.2: Example 3D PET/CT study represented as stacks of slices. Data taken from a case study.

2.2 Image Preprocessing

The thoracic cavity R_Thorax is first computed offline from the CT study using a previously developed method [15]. We later use R_Thorax to delimit the PET search volume for ROI analysis. In this way, only ROIs pertinent to bronchoscopy are defined. The method combines digital topological operations, morphological operations, active-contour analysis, and key organ landmarks to produce R_Thorax. R_Thorax is defined as the volume inside the thoracic wall, rib cage, and diaphragm. In the automated extraction process, the heart's volume is excluded by design. Since the heart stores and uses sugars regularly, many PET images have intensely FDG-avid heart ventricle walls when a patient fails to follow the recommended diet. While removing the heart volume from the thoracic cavity is not required, it can reduce the number of false positives in many cases. Applying R_Thorax to the histogram and detection processes eliminates candidate detections from regions outside the chest and lungs, which are not relevant to bronchoscopy. As a second preprocessing operation, the original low-resolution PET volume I_PET is linearly interpolated to produce a new 3D volume I_PI having the same dimensions and number of voxels

as I_CT. During later calculations, both I_PET and the interpolated volume I_PI are available. We then produce a masked version of the PET image focused on the thoracic cavity via the set intersection

$$I'_{PET} = I_{PI} \cap R_{Thorax}. \qquad (2.1)$$

The masked PET image I'_PET of (2.1) is used in all subsequent operations. From I'_PET, a histogram H(i) containing M bins is generated and normalized such that

$$\sum_i H(i) = 1,$$

where i denotes the histogram bin number and H(i) is the normalized voxel count of the histogram's i-th bin, i = 1, 2, ..., M. All bins are of uniform width and evenly spaced from 0 to the maximum intensity in I'_PET. Typical PET maxima are less than an SUV of 20, but there is no limit on the upper bound. The number of histogram bins M is a parameter of the histogram calculation and may be set to any positive power of 2; i.e., M = 2^k, k = 1, 2, .... The final output of preprocessing is a PET histogram H(i) of the PET data encompassed by R_Thorax; e.g., Figure 2.3a.

2.3 Histogram Smoothing

H(i) is often qualitatively noisy, containing many small peaks and valleys. Since the distance metric we use for AP works better on smooth curves, two different smoothing methods are applied to the histogram. In Section 2.3.1 we describe kernel density estimation, and in Section 2.3.2 we describe exponential histogram smoothing.
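The preprocessing just described (interpolating the PET volume onto the CT grid, masking with the thoracic cavity, and building the normalized M = 2^k-bin histogram) can be sketched as follows. Trilinear zoom stands in for the DICOM-header-based coordinate matching, and the data and names are synthetic and illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def thoracic_pet_histogram(pet, ct_shape, thorax_mask, k=9):
    """Normalized PET histogram H(i) over the thoracic cavity.

    pet: low-resolution SUV volume; thorax_mask: boolean mask on the
    CT grid (R_Thorax). Names are illustrative, not the MVD API.
    """
    # Interpolate PET onto the CT voxel grid (order=1 -> trilinear),
    # producing I_PI with the same dimensions as I_CT.
    factors = [c / p for c, p in zip(ct_shape, pet.shape)]
    pet_interp = zoom(pet, factors, order=1)
    # Eq. (2.1): restrict the interpolated PET to the thoracic cavity.
    suv = pet_interp[thorax_mask]
    # M = 2^k uniform bins from 0 to the masked maximum, normalized so
    # that the bin proportions sum to 1.
    m = 2 ** k
    counts, edges = np.histogram(suv, bins=m, range=(0.0, float(suv.max())))
    h = counts / counts.sum()
    return h, edges

# Synthetic demo: a 16x16x8 PET volume mapped onto a 32x32x16 CT grid.
pet = np.abs(np.random.default_rng(0).normal(1.0, 0.5, (16, 16, 8)))
mask = np.zeros((32, 32, 16), dtype=bool)
mask[4:28, 4:28, 2:14] = True
h, edges = thoracic_pet_histogram(pet, mask.shape, mask, k=6)
```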

2.3.1 Kernel Density Estimation via Diffusion

To smooth H(i), we implement a kernel density estimator based on the work of Botev et al., which uses a linear diffusion approach to estimate the Gaussian kernel density [25]. In this method, the objective is to identify the set of summed Gaussian kernels that best approximates a given histogram H, and to use those estimated Gaussians to generate the smoothed histogram Ĥ. A key step in performing kernel density estimation is selecting a bandwidth t for the underlying Gaussian distributions. If the bandwidth estimate is too large, the values become excessively smoothed and distinct modes of Ĥ may be lost. If t is too small, each bin gets its own distribution, resulting in no smoothing. Let φ denote the Gaussian PDF (kernel),

$$\varphi(x, X_i; t) = \frac{1}{\sqrt{2\pi t}}\, e^{-(x - X_i)^2/(2t)}, \qquad (2.2)$$

where t is the scale and X_i is one of N independent samples χ_N ≡ {X_1, ..., X_N} drawn from an unknown continuous probability density function f. The corresponding Gaussian kernel density estimator f̂ is

$$\hat{f}(x; t) = \frac{1}{N} \sum_{i=1}^{N} \varphi(x, X_i; t), \qquad x \in \mathbb{R}. \qquad (2.3)$$

In our application, the estimator for Ĥ is defined as

$$\hat{H}(x; t) = \frac{1}{m} \sum_{i=1}^{m} \kappa(x, H(i); t), \qquad x \in [0, 1], \qquad (2.4)$$

where the kernel κ is given by the sum of two offset Gaussians:

$$\kappa(x, H(i); t) = \sum_{k=-\infty}^{\infty} \left[ \varphi(x, 2k + H(i); t) + \varphi(x, 2k - H(i); t) \right], \qquad x \in [0, 1]. \qquad (2.5)$$

Figure 2.3: Histograms of I'_PET. The progression highlights the effect of the smoothing steps: (a) the original histogram H(i), (b) the KDE-smoothed version Ĥ(i), and (c) the exponentially smoothed H̄(i) (L = 20) overlaid on top of the first two. All vertical axes are log base 10, and horizontal axes use 512 bins.
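The thesis uses Botev's diffusion estimator with Improved Sheather-Jones bandwidth selection; as a simplified illustration of the bandwidth's role and of the monotone mode-count property discussed in the text, one can Gaussian-smooth a noisy histogram at increasing scales. (Diffusion for time t corresponds roughly to Gaussian smoothing with sigma = sqrt(t) in bin units; this stand-in ignores the boundary-reflection kernel of Eq. (2.5).)

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def count_modes(h):
    """Number of local maxima (modes) of a 1D histogram."""
    return int(np.sum((h[1:-1] > h[:-2]) & (h[1:-1] >= h[2:])))

rng = np.random.default_rng(1)
# Noisy bimodal "histogram": two Gaussian bumps plus counting noise.
x = np.linspace(0.0, 1.0, 256)
h = np.exp(-(x - 0.2) ** 2 / 0.002) + 0.5 * np.exp(-(x - 0.7) ** 2 / 0.004)
h = h + rng.uniform(0.0, 0.05, x.size)

# Larger bandwidth (sigma) -> the number of modes never increases.
modes = [count_modes(gaussian_filter1d(h, sigma=s, mode='reflect'))
         for s in (1, 4, 16)]
print(modes)  # mode count is non-increasing as sigma grows
```

This is why bandwidth selection matters: too small a t leaves the noise modes intact, while too large a t can merge the genuinely distinct modes that the later AP clustering relies on.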

In (2.2)-(2.5), t is referred to as the scale or bandwidth of the estimator. In Gaussian kernel density estimators, the bandwidth represents the standard deviation of the underlying Gaussian kernels used to estimate the distribution, and it later proves to be a very important parameter. We also note an important property of (2.3): the number of local maxima (modes) of the output distribution monotonically decreases as t increases. In this thesis, we use the Improved Sheather-Jones algorithm for bandwidth estimation [25]. We replace the fixed-point root-finding algorithm used in Botev's Improved Sheather-Jones method with the Brent-Dekker root-finding algorithm, and we create a discrete cosine transform (DCT) implementation of the diffusion algorithm [25, 26]. The Improved Sheather-Jones algorithm requires a width input that affects the amount of kernel smoothing resulting from the bandwidth selection. We determine the number of distinct SUV intensity values in the original uninterpolated I_PET and provide it as the width parameter. After the bandwidth is determined, it is applied to the DCT data, and the inverse DCT is applied to obtain the smoothed histogram Ĥ. An example histogram output from KDE via diffusion is shown in Figure 2.3b. Compared to the original histogram H in Figure 2.3a, Ĥ is smoother and its values are more connected. The effect is clearest at the higher SUV values on the right side of the figure.

2.3.2 Exponential Histogram Smoothing

The output of the kernel density estimation process is often still inadequately smoothed for the purposes of clustering with AP. To correct this, exponential smoothing with a window is applied to the histogram before running the AP algorithm. Foster et al.'s paper is ambiguous on the method and does not cite a specific version [22]. We chose the convolution of Ĥ(i) with an unweighted (uniform-valued) kernel κ_L of length L:

$$\bar{H}(i) = (\hat{H} \ast \kappa_L)(i), \qquad (2.6)$$

where H̄(i) represents the proportion in the i-th bin of the exponentially smoothed histogram. Reflective boundary conditions are used so that only the known values of Ĥ are necessary and the ends of the histogram are not adversely affected. To disambiguate it from the output of the previous step, we refer to the output of this method as H̄(i). Figure 2.3c shows the output of the exponential smoothing, taking the histogram in Figure 2.3b as input. The figure also overlays the final result on top of the other two results. In our case, exponential smoothing tends to reduce the magnitude of the peak at bin zero and spread it across multiple bins of the histogram, visible at the lower SUV values.

2.4 Threshold Selection

After histogram smoothing, the AP clustering algorithm can be executed using the normalized histogram H̄(i) as an input. Section 2.4.1 describes the AP clustering algorithm in depth; the identification of potential threshold values from the clusters is then discussed in Section 2.4.2. The identified thresholds are applied to the masked PET image I'_PET, and preliminary ROI candidates are detected and segmented for further processing.

2.4.1 Affinity Propagation

Affinity Propagation is an iterative clustering algorithm that depends only on the inputs and a distance measurement between some portion of the data points. Unlike K-means clustering, AP does not depend on an initial guess of the number of clusters. AP is also deterministic: it always returns the same cluster outputs for the same distance inputs, and all intermediate steps remain the same. Since AP does not depend on knowledge of the distance between all data points, it can also be used for graphs or applications with limited knowledge of distances. The fundamental clustering unit in AP is the exemplar: an element of the set that best represents some subset of elements. Any element can be an exemplar. Elements choose an exemplar

that represents them based on distance, responsibility, and availability of other elements. An outline of the AP algorithm follows. First, distances between points are computed and input to the algorithm as a matrix, which starts the iterative process. Each iteration of AP calculates the responsibility and availability of each point from the distance matrix. These two matrices are then used to identify the best exemplars at each step, and data points are assigned to one of the available exemplars. The algorithm repeats these steps until a convergence criterion is met. For our convergence criterion, we terminate if the list of best exemplar choices remains the same for t_min sequential iterations. We give a more complete discussion of the AP algorithm below. The algorithm depends on three major measures or matrices, as mentioned above: distance S_ij, responsibility R_ik, and availability A_ik, where i is the index of the element being considered and j and k are the other elements used to calculate values for element i. S_ij denotes the measured distance between elements i and j and is the primary input to the AP algorithm. For this application, we use a metric that combines the Euclidean distance d_E and the geodesic distance d_G of the histogram values; i.e.,

$$S_{ij} = \left( \left(d^E_{ij}\right)^L + \left(d^G_{ij}\right)^K \right)^{0.5}, \qquad (2.7)$$

where

$$d^E_{ij} = d^E(i, j) = \left( \left( \bar{H}(i) - \bar{H}(j) \right)^2 + \left( SUV[i] - SUV[j] \right)^2 \right)^{0.5}, \qquad i \neq j, \qquad (2.8)$$

$$d^G_{ij} = d^G(i, j) = \sum_{k=i}^{j-1} d^E(k, k+1). \qquad (2.9)$$

In (2.8), SUV[i] is the SUV value associated with the left edge of the i-th bin of H̄. The geodesic exponent K and the Euclidean exponent L can be any scalar values. The algorithm allows for selecting positive or negative preferences for exemplars. These

input preferences are defined along the diagonal S_ii. If no input preferences are provided, the diagonal S_ii consists of zeros. Non-zero values indicate a preference, and positive values indicate desired exemplars.

The responsibility matrix R_ik describes, for point i, how well suited point k is to serve as the exemplar for point i, taking into account the accumulated evidence of other possible exemplars for point i. It is defined as the distance from i to k minus the maximum, over all other points, of the sum of availability and distance from i; i.e.,

    R_ik = S_ik − max_{j : j ≠ k} { A_ij + S_ij }.    (2.10)

By extension, self-responsibility R_ii reflects how well suited i is to be an exemplar based on the combination of accumulated evidence and input preference, where

    R_ii = S_ii − max_{j ∈ {Exemplars}, j ≠ i} S_ij.    (2.11)

Self-responsibility takes the input exemplar preference S_ii and subtracts the maximum distance to an exemplar. If that distance is small, the likelihood of i becoming an exemplar increases. Negative self-responsibility indicates that a point is better suited to belong to another exemplar's grouping than to be an exemplar itself.

The availability A_ik for a point i describes how appropriate it would be for point k to serve as its exemplar, taking into account the support from other points for point k to serve as an exemplar; i.e.,

    A_ik = min{ 0, R_kk + Σ_{j : j ∉ {i,k}} max{0, R_jk} }.    (2.12)

The value of A_ik is driven largely by the self-responsibility R_kk, together with the sum of the positive responsibilities of point k for all other points. Only the positive values are included,

because if a point chooses another exemplar to be more responsible for it, that choice does not degrade the value of point k as an exemplar. Self-availability A_kk indicates how strong the evidence is for point k to serve as an exemplar, based on the accumulated evidence from all other points, where

    A_kk = Σ_{j : j ≠ k} max{0, R_jk}.    (2.13)

In other words, all non-negative evidence from R_jk is considered when calculating A_kk.

Exemplars are determined at the end of each iteration. Each element of the histogram is assigned to an exemplar, and exemplars are chosen by the maximization

    E[i] = argmax_c { R_ic + A_ic },    (2.14)

where c ranges over the set of elements with self-responsibility R_cc and self-availability A_cc summing to a positive value (R_cc + A_cc > 0). After this is finished, E[i] contains the index of the exemplar of the i-th point in the set.

Since it is difficult to determine algorithm convergence by examining the two matrices in a meaningful way, convergence is determined by the consistency of exemplar selection for each data point. Once the exemplar choices do not change for a pre-determined number of iterations t_min, we assume the algorithm has converged.

During the AP iterations on real data, the values of R_ik and A_ik can fluctuate across extremes due to the algorithm's complex dependence on all other elements. As a result, if no damping is applied, the series may not converge in some cases. In this work, exponential damping for time-series data is used [Oppenheim 1975]. The damping function f(x_i; λ) for a recursively dependent function g(x_i) at iteration i, for a given damping value λ ∈ [0, 1], is given by

    f(x_i; λ) = (1 − λ) g(x_i) + λ f(x_{i−1}).    (2.15)
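As a concrete illustration, the distance construction of (2.7)–(2.9) and the damped update loop of (2.10)–(2.15) can be sketched in Python. This is a minimal pure-Python sketch, not the MVD's C++ implementation; following the standard formulation of AP, larger entries of the input matrix are treated as more preferable, so a similarity-style input (e.g., negated distances) is assumed, and all function and variable names are illustrative.

```python
def histogram_distances(H, SUV, L=0.5, K=2.0):
    """Pairwise distances per (2.7)-(2.9) on the histogram curve points."""
    n = len(H)
    d_e = lambda i, j: ((H[i] - H[j]) ** 2 + (SUV[i] - SUV[j]) ** 2) ** 0.5
    cum = [0.0]                      # cumulative arc length used for d_G
    for k in range(n - 1):
        cum.append(cum[-1] + d_e(k, k + 1))
    return [[0.0 if i == j else
             (d_e(i, j) ** L + abs(cum[j] - cum[i]) ** K) ** 0.5
             for j in range(n)] for i in range(n)]

def ap_cluster(S, lam=0.8, t_min=10, t_max=1000):
    """Damped affinity propagation per (2.10)-(2.15).

    S[i][j] is the pairwise input matrix with exemplar preferences on the
    diagonal; as in the standard formulation, larger values are more
    preferable, so distance-like inputs should be negated first.
    """
    n = len(S)
    R = [[0.0] * n for _ in range(n)]
    A = [[0.0] * n for _ in range(n)]
    last, stable = None, 0
    for _ in range(t_max):
        # Responsibility update, eq. (2.10), damped per (2.15).
        for i in range(n):
            AS = [A[i][j] + S[i][j] for j in range(n)]
            for k in range(n):
                m = max(AS[j] for j in range(n) if j != k)
                R[i][k] = lam * R[i][k] + (1 - lam) * (S[i][k] - m)
        # Availability updates, eqs. (2.12) and (2.13), damped per (2.15).
        for k in range(n):
            pos = [max(0.0, R[j][k]) for j in range(n)]
            tot = sum(pos)
            for i in range(n):
                if i == k:
                    new = tot - pos[k]                 # self-availability (2.13)
                else:
                    new = min(0.0, R[k][k] + tot - pos[i] - pos[k])  # (2.12)
                A[i][k] = lam * A[i][k] + (1 - lam) * new
        # Exemplar assignment, eq. (2.14).
        E = [max(range(n), key=lambda c: R[i][c] + A[i][c]) for i in range(n)]
        # Convergence: identical exemplar choices for t_min iterations.
        if E == last:
            stable += 1
            if stable >= t_min:
                break
        else:
            stable, last = 0, E
    return E
```

On a toy 1-D input with two well-separated groups and negated absolute differences as similarities, the loop settles on one exemplar per group; the damping factor lam plays the role of λ in (2.15).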

This effectively slows the rate of variation, allowing the function to converge in fewer iterations; λ effectively serves as the step size for the algorithm.

The AP algorithm outputs an array of integers E[i], one for each bin of H(i). E[i] identifies bin i's exemplar group, which is the index of another bin. With our distance weighting scheme (2.7) on the normalized histogram H, we find that the clusters usually form nearly contiguous segments along the horizontal axis. The array E[i] must be processed further to select the boundaries of the clusters, from which we can create thresholds and detect ROIs. Processing of E[i] is described in Section 2.4.2.

2.4.2 Thresholding and Detection

The array of cluster assignments E[i] for the histogram bins serves as the input to threshold selection. However, the cluster assignments in E[i] are sometimes non-contiguous. Common disconnections are edge swaps, where two bins between regions are not connected to their respective regions. Hence, further processing is required before threshold extraction, or too many thresholds would result. In this case, a simple check for changes of cluster identifiers, with knowledge of the previous and next bins' E[i], is adequate to properly select cluster edges and, therefore, thresholds for the image.

Algorithm 1 describes the method we use for ensuring cluster contiguity. We combine a three-element median filter with overlap-test conditions on the ends of the cluster groups, and track the lowest and highest SUV of each cluster. We then use the list of lowest SUVs to identify an SUV threshold for each cluster grouping. Algorithm 1 effectively reduces the number of extra thresholds due to non-contiguous clusters. This results in a set of J SUV intensity values T_k, k = 1, ..., J, that can be used to threshold and segment the image.

Next, the thresholds are applied and the detected ROIs are segmented. Thresholds are applied from the highest SUV value first, down to the lowest intensity. Each T_k is applied to the

Algorithm 1 Threshold identification algorithm.

 1: Inputs:
 2:   Exemplar list for all bins E[i], with n elements
 3:   SUV intensities for the histogram bins S[i], i = 1, ..., n
 4:   Unique exemplar group numbers C[j], j = 1, ..., m
 5:   R[j] and L[j] track the right and left edge indexes of cluster j, each of size m
 6:   T[j] is an empty vector of size m
 7: B[1] ← E[1]
 8: for i in {2, 3, ..., n−1} do
 9:   B[i] ← Median(E[i−1], E[i], E[i+1])   // median of the three values
10: B[n] ← E[n]
11: for i in {1, 2, ..., n} do   // find the indexes of the edges of the clusters
12:   for j in {1, 2, ..., m} do
13:     if B[i] = C[j] then
14:       R[j] ← i   // R[j] is the right-most element in cluster j
15:       if L[j] = 0 then
16:         L[j] ← i
17: for j in {1, 2, ..., m−1} do   // correct any larger transpositions
18:   if R[j] ≥ L[j+1] then
19:     mid ← ⌊(L[j+1] + R[j]) / 2⌋   // if a large overlap exists, split it at the middle index
20:     L[j+1] ← mid, R[j] ← mid
21: for j in {1, 2, ..., m} do   // collect the SUV values
22:   T[j] ← S[L[j]]
23: return T   // SUV threshold values

masked PET image I_PET to create a binary mask image. Connected-component analysis is used to segment the contiguous regions in the binary image into detected regions of interest R_i, i = 1, ..., I. The R_i denote the initial detections of PET ROIs and are subjected to the further filtering steps described in Section 2.5.
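The core of Algorithm 1 can be sketched compactly in Python. This sketch, with illustrative names, applies the three-element median filter and then reads off the SUV at each cluster's left edge; the explicit edge bookkeeping and overlap splitting of the full algorithm are omitted, which gives the same thresholds whenever the filtered labels are contiguous.

```python
def extract_thresholds(E, SUV):
    """Median-filter the exemplar labels, then take the SUV value at each
    cluster's left edge as that cluster's threshold (cf. Algorithm 1)."""
    n = len(E)
    B = list(E)
    for i in range(1, n - 1):                      # 3-element median filter
        B[i] = sorted((E[i - 1], E[i], E[i + 1]))[1]
    # A change in the filtered label marks a cluster's left edge.
    return [SUV[i] for i in range(n) if i == 0 or B[i] != B[i - 1]]
```

For example, an "edge swap" (a stray label between two runs) is absorbed by the median filter, so only one threshold per contiguous cluster survives.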

2.5 False Positive Reduction

2.5.1 SUV Intensity Restriction

When segmenting, each threshold generates multiple ROIs, and some low SUV intensity thresholds create many false positives. To reduce the number of false-positive ROIs, we restrict the minimum SUV threshold value T_min to SUV = 2.0. We enforce this restriction by discarding any threshold less than T_min prior to applying the thresholds to I_PET. Setting a minimum allowable SUV threshold provides a means to reduce false positives in PET scans exhibiting low maximum intensities, while still permitting dynamic threshold choices.

2.5.2 Size Restriction

One feature of PET images is that the masked PET image I_PET will contain many very small regions that exceed an intensity threshold but are not big enough to be diagnostically relevant. To account for this, a minimum volume restriction, or size threshold, V_min is applied to the candidate ROIs generated by the segmentation step. Removing candidate ROIs smaller than V_min significantly reduces the number of false positives in the candidate set. An upper volume limit V_max is also used to remove detections from thresholds that would select the whole volume, or overly large areas.

In detection of early-stage lung cancer, ROI regions will be small relative to the volume of the thoracic cavity. While the largest ROI volumes arising from the multiple thresholds are likely to be screened out by the size restriction, the corresponding regions are still likely to be detected: higher-intensity regions follow an approximately 3-dimensional Gaussian intensity distribution, with the highest intensity near the center of the region, so a smaller, more intense subregion detected at a higher threshold will be kept as part of the candidate detections. After the size restrictions V_min and V_max are applied to the initial candidate detection set R_i, the remaining ROI detections R_j are the final output of our algorithm.
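The two restrictions amount to a pair of simple filters. The parameter values below (T_min = 2.0 SUV, V_min = 100 mm³, V_max = 50 cm³) follow the text, but the tuple representation of a candidate ROI is hypothetical, chosen only to keep the sketch self-contained.

```python
T_MIN_SUV = 2.0        # minimum allowable SUV threshold
V_MIN_MM3 = 100.0      # minimum ROI volume, in mm^3
V_MAX_MM3 = 50_000.0   # maximum ROI volume: 50 cm^3 expressed in mm^3

def restrict_thresholds(thresholds, t_min=T_MIN_SUV):
    """Discard thresholds below t_min before they are applied to I_PET."""
    return [t for t in thresholds if t >= t_min]

def restrict_sizes(rois, v_min=V_MIN_MM3, v_max=V_MAX_MM3):
    """Keep candidate ROIs whose volume (mm^3) lies within [v_min, v_max].
    Each ROI is represented here as a (label, volume_mm3) tuple."""
    return [r for r in rois if v_min <= r[1] <= v_max]
```

In the pipeline, the threshold restriction runs before segmentation and the size restriction runs after it, so each filter sees the smallest possible candidate set.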

2.6 Existing PET ROI-Detection Approaches

In this section, we describe two other approaches to ROI identification that use fixed thresholds on the PET image. These are common and simple methods for selecting PET-avid sites from PET studies.

A fixed PET SUV threshold of 2.5 has been tested as a means to distinguish malignant from benign tumors in PET/CT studies [Nestle 2006]. We define the SUV_2.5 thresholding operation as follows:

    I_SUV2.5(x, y, z) = { I_PET(x, y, z),  if I_PET(x, y, z) ≥ 2.5
                        { 0,               otherwise.    (2.16)

Thresholds defined relative to the absolute maximum intensity in the PET image, SUV_max, normalize the threshold for a given image based on the maximum-uptake region in the PET study [17, 18, 20]. For our purposes, we chose T_50% for its simplicity. The operation is defined as

    SUV_max = max_{x,y,z} { I_PET(x, y, z) },    (2.17)

    I_T50%(x, y, z) = { I_PET(x, y, z),  if I_PET(x, y, z) ≥ 0.50 · SUV_max
                      { 0,               otherwise,    (2.18)

where I_PET is defined in (2.1). In Chapter 3, we will use SUV_2.5 and T_50% as bases for comparing the detection performance of the AP-based method.
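Both baselines are voxelwise masks; a minimal sketch, operating on a flat list of voxel values for brevity rather than a full 3D volume, is:

```python
def suv_2_5(i_pet):
    """Eq. (2.16): keep voxels with SUV >= 2.5, zero elsewhere."""
    return [v if v >= 2.5 else 0.0 for v in i_pet]

def t_50_percent(i_pet):
    """Eqs. (2.17)-(2.18): keep voxels >= 50% of the image maximum."""
    suv_max = max(i_pet)
    return [v if v >= 0.5 * suv_max else 0.0 for v in i_pet]
```

Note that T_50% adapts to each study's maximum uptake, while SUV_2.5 is an absolute cutoff; the two masks coincide only when SUV_max happens to equal 5.0.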

2.7 Augmentation of the MVD

Our AP method and the thresholding methods of Section 2.6 have been integrated into a module of the Multimodal Visual Display (MVD). The MVD was implemented on a Windows 64-bit platform using Visual Studio C++. Its graphical user interface (GUI) makes extensive use of Qt5 (5.7.0), including the QtCharts library. We make use of two open-source image-processing libraries: the Insight Toolkit (ITK) and OpenCV. We use ITK, a multi-dimensional image-processing library, for volumetric image processing, histogram generation, and mask application [27]. We use OpenCV for analysis, segmentation, overlap detection, and data-storage structures [28]. CMake is employed to generate the Visual Studio project and solution files, for portability and ease of project management.

The primary inputs to the MVD and the AP tool are the co-registered PET/CT scan pair; a secondary input is the thoracic cavity mask R_Thorax. A PET/CT scan pair usually comprises two 3D images, a PET and a CT. Since the MVD automatically linearly interpolates the PET image to match the CT image when loading a PET/CT pair, we can apply the thoracic mask to the interpolated PET image prior to histogram generation.

To develop the AP tool, we made changes to the MVD to improve functionality and accessibility. The first of these changes improved the flow of thoracic-cavity loading. Originally, the MVD could only load one ROI file at a time and had no ready way to load a separate mask or other region of interest. As a result, many steps were required to load known ROIs and an image mask. This conflict was resolved by adding another ROI loading table to the ROI window. Changes to the ROI loading GUI are illustrated in Figure 2.4. In addition to the improvements to the ROI window, the tool was integrated into MVD's existing Computer-Aided-Detection (CAD) module, as displayed in Figure 2.5.
The AP tool performs histogram generation, histogram smoothing, AP clustering, threshold identification, and segmentation. It uses the CAD tool's ROI table to show the list of detected results.

Figure 2.4: Improvements to the ROI window, which add the ability to load both an ROI file and a mask file (new feature highlighted by the red box).

Using the ROI table and MVD's slicer display, these ROIs can be individually examined, removed, or confirmed by the user prior to export for use in the Virtual Navigator System (VNS). We store these ROIs in MIPL's *.iroi format. These outputs can then be used to analyze the performance of our method by comparing the detected ROIs with the ground-truth PET information, also stored in the *.iroi format. The list of ROIs can then be loaded into the ROI tool for further editing and revision, or loaded directly into a VNS case study for planning and navigation.

Figure 2.6 shows all the tabs and views of the new AP tool. The ability to adjust parameters is available, though frequently unnecessary once an adequate configuration is set. The histogram shown in Figure 2.6c updates after each individual step of the algorithm, adding new histograms on top of those from earlier steps to allow quick and easy comparison. While the plot defaults to a log-scale ordinate axis, this is easily changed via a right-click menu (not shown).

Figure 2.5: The AP tool was integrated as a sub-feature of MVD's CAD tool. This figure shows both (a) the CAD window and (b) the new AP window's histogram after execution of the AP detection by clicking Execute All. The histogram bar color indicates the AP cluster of each bin, and the ordinate axis is log-scale.

Two simple comparison methods, SUV_2.5 and T_50%, are included in the Other tab shown in Figure 2.6b. To automatically detect PET ROIs using AP, users must first load the CT and PET studies. Next, for thoracic cavity restriction, the user loads and sets the thoracic cavity mask in the ROI window. They then click the CAD button and the Affinity Propagation button on the CAD window that opens. Then, when the user clicks Execute All, ROIs are detected and the results are output to the CAD table view. For further detail, please refer to Appendix B, which gives a detailed discussion of using the system.

In Chapter 3, we describe our image testing database, parameter selections, results from the AP-based method, and results from the comparison methods.

Figure 2.6: The new interface implementing Affinity Propagation added to the MVD. (a) allows adjustment of parameters and options, (b) implements T_50% and SUV_2.5, and (c) provides a larger view of the window with test-logging options and a histogram display.

Chapter 3

Results

In this chapter, we consider how our AP-based semi-automatic PET ROI detection method performs on a PET/CT database and compare its performance with other methods. We begin by briefly describing the database and its contents in Section 3.1. Next, we discuss the parameter selection and tuning phase for our method in Section 3.2. Then, in Section 3.3, we test the optimized method against other well-known methods and evaluate the results on a set of cases. Finally, Section 3.4 provides observations on our results.

3.1 PET/CT Database

To test our methods, we draw from a previously constructed PET/CT ground-truth (GT) database consisting of co-registered PET/CT scans and expert-defined ROIs [29]. The database was built during an earlier study designed to evaluate the performance of the MVD [13]. Images were collected from a Philips Gemini TF 16 PET/CT scanner. The CT scans consist of axial slices with 3 mm inter-slice spacing. Table 3.1 gives the CT axial-plane voxel spacing Δx and number of slices N_z, along with demographics and other image characteristics. The PET scans consist of axial slices with 3 mm inter-slice spacing and axial-plane spacing Δx = Δy = 4 mm.

No.  Gender  Age   Contrast
1    F       58    No
2    M       66    No
3    F       54    No
4    M       70    No
5    F       72    No
6    F       N/A   No
7    F       68    No
8    M       66    No
9    M       67    No
10   F       68    No
11   F       78    No
12   F       51    No
13   M       73    No
14   F       74    No
15   F       56    No
16   M       63    No
17   M       68    No
18   F       75    No
19   F       76    No
20   F       57    Yes

Table 3.1: PET/CT scan database. Case denotes the patient ID per IRB protocol number 21405. Gender indicates the patient's gender (M: male, F: female). Age gives the patient's age. Contrast indicates whether or not the co-registered CT scan was produced using a contrast agent. N_z denotes the number of axial sections. Δx (and Δy) denotes the axial-plane resolution of the whole-body CT scan in mm. Note that the axial-plane resolution of all corresponding whole-body PET scans is 4.0 mm. #ROI denotes the number of PET ROIs identified. The ROI Intensity Distribution sub-columns Mild, Moderate, and Intense indicate the number of PET ROIs in each category as classified by the collaborative team. Flagged cases were used only for parameter selection.

The database was created collaboratively with a nuclear-medicine radiologist, a pulmonary physician, and two imaging scientists. Together, they identified and defined all thoracic PET-avid ROIs in the PET scans and their CT correlates using the MVD system [29]. This resulted in a 25-case database, from which we chose a 20-case subset. The selected PET/CT studies consist of 81 PET-avid ROIs, of which there were 60 lymph nodes, 15 nodules, and 5 masses. More information about the subjects and specifics on each ground-truth ROI can be found in

Appendix A.

For this thesis, we selected database cases to give a distribution of PET ROIs from low to high intensity across multiple different subjects, as summarized in Table 3.1. We chose three of these cases to use only for training/parameter selection (Section 3.2). The remaining 17 cases were used for subsequent testing and analysis (Section 3.3).

3.2 Parameter Selection

We performed tests to optimize the parameter values for the AP method. The three database cases 98, 108, and 116 were used to identify the optimized operating parameters. This gives an optimized method for the study of Section 3.3.

The primary objective of our parameter-selection study is to select a set of parameters that maximizes sensitivity and positive predictive value. In particular, we wish to maximize the number of true positives while limiting the number of false positives. A secondary objective is to minimize the user time spent identifying useful ROIs, while keeping computation time low. To accomplish this objective, we reduce duplication of ROIs whenever possible. This reduces interactive time, since the user must review each candidate to identify diagnostically relevant ROIs.

To establish a basis for comparison, we use the following standard statistical measures during testing [30]:

True positive (TP): A correctly detected ROI. A candidate ROI is detected, and a portion of that region is defined in the ground truth. We chose an overlap of 1 voxel as the minimum overlap.

False positive (FP): An incorrectly detected ROI. A detected candidate ROI that is not present in the ground truth.

True negative (TN): A correctly non-detected region. A region is not detected and is also not present in the ground-truth set.

False negative (FN): An incorrectly non-detected region. A region is present in the ground truth but is not detected as a candidate ROI.

The metrics we use to measure a detection algorithm's efficacy are the sensitivity and the positive predictive value (PPV). PPV represents the portion of the candidate ROI set that correctly identifies an ROI in the set of ground-truth ROIs:

    PPV = TP / (TP + FP).    (3.1)

Sensitivity, also known as the true positive rate (TPR), represents the proportion of the GT ROI set correctly detected by the set of candidate ROIs; i.e.,

    Sensitivity_TPR = TP / (TP + FN).    (3.2)

As detailed in Section 2.4.2, the AP detection method first thresholds and detects ROIs from the highest AP threshold T_J down to the lowest detection threshold T_1; i.e., T_k, k = J, J−1, ..., 1. As a result, the algorithm may detect a region several times at different SUV thresholds, creating multiple candidate ROIs for the same GT region. To compensate for multiple TPs identifying the same ground-truth ROI, we chose to define the sensitivity as the fraction of the set of ground-truth ROIs that are detected at least once in the candidate set. We refer to these uniquely detected ground truths as true detections (TD). Sensitivity as the true detection rate (TDR) can then be defined as

    Sensitivity_TDR = TD / GT,    (3.3)

where GT is the number of PET ground-truth ROIs defined in the case. If we use definition

Parameter  Step         Definition                                           Value
M          Histogram    Number of histogram bins                             256
W_exp      Exp. smooth  Smoothing window                                     20
L          AP           Euclidean exponent, (d_E)^L                          0.5
K          AP           Geodesic exponent, (d_G)^K                           2
t_min      AP           Minimum iterations of identical exemplar selections  10
t_max      AP           Maximum number of AP iterations                      1000
λ          AP           Damping factor (lambda)                              0.8
V_min      Vol. restr.  Minimum ROI volume                                   100 mm³
V_max      Vol. restr.  Maximum ROI volume                                   50 cm³

Table 3.2: Final operating parameters selected. Parameter is the expression used to identify the parameter, Step identifies the algorithm step that requires it, Definition is an expanded name, and Value gives the final optimized value of the parameter.

(3.2), sensitivity would favor parameters that produce the highest number of thresholds, which would increase the number of candidate ROIs. Since increasing the quantity of candidate ROIs increases user interaction, we opt to use the TD-based sensitivity definition from (3.3) for all testing.

The rest of this section is organized as follows. Section 3.2.1 introduces the approach used for testing. Section 3.2.2 details the process of parameter selection and the results for the parameters used by histogram generation and smoothing. Section 3.2.3 describes parameter selection for AP and the associated results.

3.2.1 Selection Overview

Table 3.2 summarizes the parameters used in the algorithm, each of which is defined in Chapter 2. The cases used for parameter selection, 98, 108, and 116, were chosen for their diversity

Table 3.3: Effect of M on computation time in seconds (s), averaged over cases 98, 108, and 116. M is the number of histogram bins used. All following columns give the time in seconds to perform the indicated calculation: Hist is histogram generation, KDE represents KDE via diffusion, and Exp is exponential smoothing. AP and AP iter denote the total time for Affinity Propagation clustering and the average time for a single AP iteration, respectively. ROI gives the time for segmentation and comparison to the ground truth, and Total is the cumulative time required for all computation steps.

of GT ROI sizes, intensities, and quantities of ground truths. Parameters were identified first on case 98, which has a high number of GT ROIs. Parameters selected from the first case were then either confirmed or modified as a result of subsequent tests with the other two cases. During the process of parameter tuning and selection, one parameter (or at most two) was varied while all other parameters were held fixed. After a strong candidate was identified, it was fixed and used for the rest of the selection process for that case. New data from subsequent cases were then considered, and the selections were modified accordingly.

The following parameters were selected through experimental testing: M, L, K, t_min, and λ. We selected the other parameters, W_exp, t_max, V_min, and V_max, by reviewing and selecting values defined by Foster et al. [22] or by using a previously selected default.

3.2.2 Histogram and Smoothing Parameters

We first studied the impact of varying M, the number of histogram bins. We studied M first for two reasons. First, changing M strongly affects the number of clusters. Second, the choice of M directly affects the total analysis time, due to constraints imposed by our algorithm. Specifically, the implementation of the discrete cosine transform used by the KDE algorithm

Table 3.4: Effect of M on sensitivity and PPV in the three test cases from IRB 21405: 98, 108, and 116. M bins is the number of histogram bins. Sensitivity lists the true detection rate for cases 98, 108, and 116, and the mean over all three. PPV denotes the positive predictive value for cases 98, 108, and 116, and the mean over all three.

requires the number of input bins to be a power of 2; i.e., M = 2^j, where integer j > 1. As a result, the options for the number of histogram bins were limited by a trade-off: higher values of M may result in more or better threshold groupings, but the computation time increases nonlinearly. Since the algorithmic complexity of one iteration of the AP algorithm is O(M³), we chose M to provide high sensitivity and low AP computation time.

Table 3.3 lists the average computation time in seconds for variations of M over the range [128, 1024] for the three test cases, while Table 3.4 lists the sensitivity and PPV for each case, using the optimized parameters from Table 3.2. Table 3.4 suggests that sensitivity is maximized at M = 512, while PPV decreases on average with every increase in M. However, 2 of the 3 test cases had their highest PPV at M = 256, and 2 of 3 had their maximum sensitivity there as well. Minimizing the computation time and the number of ROIs generated received stronger consideration in our selection of M = 256 vs. M = 512. Since our secondary objective is to minimize user time, and the outputs for M = 256 and M = 512 were similar enough in Table 3.4, we chose M = 256 to reduce the total user time required.
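The detection accounting used throughout these comparisons, TP/FP counts with a one-voxel overlap rule, PPV (3.1), and the TD-based sensitivity (3.3), can be sketched as follows. ROIs are represented here as sets of voxel coordinates, a hypothetical but convenient encoding; the function name is illustrative.

```python
def score_detections(candidates, ground_truths):
    """Compute TP, FP, PPV per (3.1), and TD-based sensitivity per (3.3).

    Both inputs are lists of voxel-coordinate sets; an overlap of at
    least one voxel counts as a hit, per the TP definition above.
    """
    tp = sum(1 for c in candidates if any(c & g for g in ground_truths))
    fp = len(candidates) - tp
    # A ground truth counts once no matter how many candidates hit it (TD).
    td = sum(1 for g in ground_truths if any(c & g for c in candidates))
    ppv = tp / (tp + fp) if candidates else 0.0
    tdr = td / len(ground_truths) if ground_truths else 0.0
    return tp, fp, ppv, tdr
```

The distinction between TP and TD matters exactly when several candidates overlap the same ground-truth ROI: TP grows with each duplicate, while TD, and hence the TDR, does not.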

Figure 3.1: Results of testing all pairs of the Euclidean weight L and geodesic weight K on a parameter-selection case. Values are shown on a white-to-red color map to emphasize relative values. The panels highlight (a) sensitivity, (b) positive predictive value, and (c) the number of true positives generated.

3.2.3 Affinity Propagation Parameters

The exponents L and K used in (2.7) define how the distance along the histogram curve is evaluated. Changing these exponents significantly affects the intensity ranges where the AP cluster boundaries fall, so careful selection is required. To explore the search space, multiple cases were tested using all combinations of L and K in the sets S_L = {0.5, 1, 1.5, ..., 4.0} and S_K = {0.0, 0.5, 1, ..., 4.0}. Sensitivity, PPV, the number of true positives, and the number of false positives were considered for each test case and each pair of exponents.

Figure 3.1 shows the result tables from the analysis of one case, using color to show relative values. Each entry in a table represents the result of one run of the AP algorithm using the values of L and K indicated on the edges of the table. Figure 3.1 highlights the importance of careful evaluation during parameter selection. As the figure demonstrates, the algorithm presents a trade-off between sensitivity, positive predictive value, and the number of true positives. Maximizing sensitivity (such as with L = 2, K = 1) results in a decreased TP count (8), whereas maximizing TPs (such as with L = 2, K = 0.5) results in a sensitivity of 70%.

While no single pair of exponents provided a consistent maximum sensitivity and PPV, the pair L = 0.5 and K = 2.0 appeared to be a good choice in the cases we tested. This pair provided consistently high sensitivity and PPV values, while keeping the total number of thresholds T_count

Figure 3.2: Effect of λ on the number of iterations before convergence and the number of resulting AP clusters, shown for one parameter-testing case. NClusters is the number of clusters created by AP, on the left vertical axis. Iters is the number of iterations required for convergence, limited to 1,000 and mapped on the right vertical axis.

reasonable. The damping factor λ, defined in (2.15), is another important parameter for convergence and the number of clusters. If λ is too low, the AP clustering algorithm may not converge, due to the oscillation caused by the feed-forward effect of (2.10) and (2.12). Any effect of λ on the detection rate was only minimally considered, since its strongest effect is on the number of iterations required to meet the convergence criterion. Figure 3.2 shows the effect of λ on the number of iterations before convergence and on the number of output thresholds. It is worth noting that for λ < 0.65, the algorithm did not converge before reaching the iteration limit of 1000 used in this test. Values between 0.72 and

the upper end of the tested range were all consistent and converged well. λ = 0.75 was used during the subsequent parameter-selection process; λ = 0.8 was identified later as a good default and used for the subsequent tests in Section 3.3.

Figure 3.3: Effect of the minimum AP repeat count t_min on the number of thresholds generated for the test cases.

Figure 3.4: Effect of the minimum AP repeat count t_min on sensitivity for the test cases.

The minimum number of repeats t_min was tested in a similar fashion. Starting from 1 repeat for

convergence, then going up in reasonable increments {1, 2, 3, 4, 5, 7, 8, 9, 10, 12, 15, 20, 25, 50, 99}, the sensitivity, the number of true positives, and the number of clusters identified by the algorithm were used as metrics. Figure 3.3 shows the effect of t_min on the number of thresholds generated for the three test cases, and Figure 3.4 shows its effect on the sensitivity of the algorithm. In all test cases, the maximum sensitivity was achieved at or below t_min = 10.

We use t_min as the convergence criterion of the AP clustering algorithm. Increasing the number of identical repeats required before termination did increase the number of clusters and thresholds, but did not change the best sensitivity. We chose 10 iterations with identical exemplar choices as adequate; increasing t_min beyond 10 showed little change in sensitivity or the number of clusters.

3.3 Experimental Results

We now present the results of testing the optimized AP-based algorithm and compare them to other common thresholding approaches on 17 cases. Section 3.3.1 discusses the Affinity Propagation method results, and Section 3.3.2 highlights results from the existing threshold-based detection methods. Next, the results from AP and the existing thresholding methods are summarized and compared in Section 3.3.3. Finally, computational time is discussed in Section 3.3.4.

3.3.1 Affinity Propagation Results

For the AP-based method, intensely FDG-avid PET ROIs are often identified at multiple different intensity thresholds. When displayed in 2D, these ROIs from different thresholds can be used to create a contour map of the region that shows tiers of intensity. Since the method applies its results to the interpolated version of the PET image, we can use visualizations to highlight the subregions of different intensity. An example of such a contour map is displayed in Figure 3.5. The fused PET/CT views in the left column of the figure show the entire slice and highlight the ROI.
The green isoline represents the ground truth and red lines each represent a

Figure 3.5: Multiple thresholds shown as a PET contour map of one intensely FDG-avid ROI from one case. The left column presents fused PET/CT views using the SUV-4.0 hot color map, while the right column presents the CT slice views zoomed in to the ROI's vicinity. Rows from top to bottom are the axial (z = 76), coronal (y = 268), and sagittal (x = 272) views. Red lines inside the images represent outlines of the detected ROI at different thresholds. The results from five thresholds are shown; i.e., T_k ∈ {2.28, 4.95, 6.96, 9.03, 11.69}.

Figure 3.6: Multiple thresholds of an intensely FDG-avid ROI on transverse slice 78 of one case. (a) Fused PET/CT view of the whole axial slice, displayed using the SUV-4.0 hot color map. (b) Focused view of the CT mediastinal mass with filled color contour regions overlaid. Each color corresponds to one of five identified PET thresholds T_k.

different threshold. Figure 3.6 displays a zoomed-in version of the transverse slice, using color to show the intensity differences. In the color version, the CT view (Figure 3.6b) shows each PET threshold in a different color overlaid on a zoomed-in CT view. The fused PET/CT view (Figure 3.6a) shows only one maximum intensity, since the PET intensity is higher than the SUV-4.0 color map allows.

For our AP results, we examine two different approaches for aggregating the results from multiple threshold values. We list them here and follow with an in-depth discussion of each approach and its results:

Approach 1 uses the ROIs generated from all thresholds identified by the AP system.

Approach 2 includes ROIs only from AP-generated thresholds with SUV values greater than 2.0.

Approach 1 utilizes all thresholds generated by the AP method; its results are shown in Table 3.5. The sensitivity of this approach is quite good, detecting 78.3% of the ground-truth elements across all cases (range: 0% to 100%). Since all thresholds are considered, more low-

50 43 Case Total ROIs Size Filtered TP FP Sens. (%) PPV (%) Mean % 18.1 % Total Table 3.5: Results of Approach 1, which used no SUV intensity restrictions on the threshold choices. Case identifies the case. Total ROIs counts the total number of ROIs detected by all the thresholds. Size Filtered counts how many ROIs remained after removal of candidates that did not meet the size criterion. TP is true positive count, FP notes false positive count, Sens. % denotes sensitivity (TDR), and PPV lists the positive predictive value. intensity candidate ROIs result, increasing the number of GTs identified. However, the cost of this sensitivity rate is an increase number of false detections, since the low-intensity thresholds detect more FP candidates. A high FP count of 29.4 FP per case results in a low overall PPV of 18.1% (range: 0% to 100%). Many of these candidates are very small, so the 100 mm 3 minimum volume restriction reduces the number of candidates significantly. Using the minimum volume restriction (Section 2.5), we discard 83.1% of the ROI detections, leaving 574 candidates across 17 cases. We note that three of the cases have no true positives detected. Since this happens in several detection methods, they are discussed in Section 3.4. Approach 2 aims to reduce the number of FPs generated by only considering thresholds with

Case ID GT ROIs Total ROIs Size Filtered TP FP Sens. (%) PPV (%) Mean 40.1% Total Table 3.6: Results for Approach 2: all ROIs generated by thresholds with SUV > 2.0. Case ID identifies the case. GT ROIs counts the GT ROIs for that case in the database. Total ROIs counts the total number of ROIs generated by all the thresholds. Size Filtered counts how many ROIs remained after removal of candidates that did not meet the size criterion. TP is the true positive count, FP the false positive count, Sens. (%) denotes sensitivity (TDR) (range: 0% to 100%), and PPV (%) lists the positive predictive value (range: 0% to 100%). SUV values greater than 2.0, since few low-intensity ROIs are defined in the ground truth. Some low-intensity PET candidates are not related to tumors or lymph nodes and are more likely to include many false positives. Low enough intensities (without volume restrictions) will also capture the entire volume under testing, creating an ROI with 100% sensitivity and 100% PPV. This potential result is the reason we introduced the upper volume restriction. Tables 3.6 and 3.7 list the results of the Approach 2 tests. Table 3.6 shares the same format as Table 3.5, while Table 3.7 lists additional information specific to Approach 2, including data about the thresholds, the maximum PET intensities of the thoracic cavities, and the number of GTs detected.
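The two false-positive controls used by Approach 2 (keeping only AP-generated thresholds with SUV > 2.0, and discarding candidates outside the 100 mm^3 to 50 cm^3 volume band) can be sketched as simple filters. This is a hypothetical Python sketch for illustration only, not the thesis's C++ implementation; the dictionary fields and example values are invented.

```python
MIN_VOL_MM3 = 100.0       # minimum ROI volume: 100 mm^3
MAX_VOL_MM3 = 50_000.0    # maximum ROI volume: 50 cm^3 = 50,000 mm^3

def restrict_thresholds(ap_thresholds, suv_min=2.0):
    """Approach 2: keep only AP-generated thresholds with SUV > suv_min."""
    return sorted(t for t in ap_thresholds if t > suv_min)

def size_filter(candidates, vmin=MIN_VOL_MM3, vmax=MAX_VOL_MM3):
    """Discard candidate ROIs whose volume falls outside [vmin, vmax]."""
    return [c for c in candidates if vmin <= c["volume_mm3"] <= vmax]

# Hypothetical example: one sub-2.0 threshold is dropped, and the
# too-small and too-large candidates are removed by the volume filter.
kept_thresholds = restrict_thresholds([1.35, 2.28, 4.95])
kept_rois = size_filter([
    {"id": 1, "volume_mm3": 40.0},       # too small, removed
    {"id": 2, "volume_mm3": 1900.0},     # within the band, kept
    {"id": 3, "volume_mm3": 120000.0},   # too large, removed
])
```

The volume band matches the restrictions described in the surrounding text; the SUV cutoff is the Approach 2 restriction.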

Case ID GT ROIs T_n I_PET^SUVmax T_max T_min GT Detected Sens. (%) Total Mean Table 3.7: Approach 2, with information about the thresholds from each case, detected using thresholds with SUV > 2.0. Case ID identifies the case number from IRB . GT ROIs indicates the number of ground-truth ROIs defined for that case. T_n is the number of thresholds generated by AP. I_PET^SUVmax is the maximum PET intensity value in the thoracic cavity. T_max identifies the maximum threshold generated. T_min identifies the minimum threshold kept after the SUV restriction. GT Detected is the number of GT ROIs identified, and Sens. (%) is the sensitivity (TDR) of the detection algorithm.

Comparison to Existing PET ROI-detection Approaches

Other methods that were tested for comparison with the AP method include SUV 2.5 and T 50%, as discussed in Section 2.6. In SUV 2.5, a single invariant threshold SUV value of 2.5 is used to detect potentially malignant ROIs (2.16). T 50% relies upon the maximum intensity SUV_max in R_Thorax for its calculation, as defined in (2.18). All comparison methods utilize the thoracic cavity mask to restrict histograms, intensities, and results to the lungs and mediastinum. Only the minimum ROI volume restriction is used; the maximum volume restriction is not required, since SUV 2.5 and T 50% thresholds are typically

Case ID Total ROIs Size Filtered TP FP Sens. (%) PPV (%) Mean 24.8% Total Table 3.8: SUV 2.5 results: results from thresholding at a fixed SUV value of 2.5 across all cases. Case ID identifies the case. Total ROIs indicates the total number of candidate regions of interest found. Size Filtered shows the number of candidate ROIs remaining after removal of candidates with volumes less than 100 mm^3 or greater than 50 cm^3. Columns 4-5 are the numbers of true positives (TP) and false positives (FP) when compared to the ground truth. Columns 6-7 identify the sensitivity and positive predictive value in percentage points.
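The two comparison thresholds can be sketched directly: SUV 2.5 applies a fixed cutoff, while T 50% halves the maximum SUV found inside the thoracic cavity mask. The sketch below is illustrative Python, not the thesis's C++ implementation; the voxel SUV list is hypothetical, and the simple index selection stands in for the connected-component ROI extraction actually used.

```python
SUV_FIXED = 2.5  # the invariant SUV 2.5 cutoff

def t50_threshold(thorax_suvs):
    """T50%: threshold at half the maximum SUV within R_Thorax."""
    return 0.5 * max(thorax_suvs)

def detect(thorax_suvs, threshold):
    """Indices of voxels at or above the threshold (a stand-in for
    connected-component ROI extraction)."""
    return [i for i, v in enumerate(thorax_suvs) if v >= threshold]

# Hypothetical voxel SUVs; SUV_max = 14.94 yields T50% = 7.47, the
# threshold quoted for the case in Figure 3.7.
suvs = [0.8, 1.2, 3.1, 7.5, 14.94]
t50 = t50_threshold(suvs)
hits_fixed = detect(suvs, SUV_FIXED)   # voxels at or above SUV 2.5
hits_t50 = detect(suvs, t50)           # voxels at or above T50%
```

Note how the relative T 50% threshold admits far fewer voxels than the fixed cutoff when SUV_max is high, consistent with the behavior discussed for the comparison methods.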

(a) Full Body PET MIP (b) MIP with R_Thorax applied (c) MIP SUV 2.5 (d) MIP T 50% Figure 3.7: Threshold results shown on a maximum intensity projection (MIP) of the PET image from case . (a) MIP of the full-body PET, (b) PET with the thoracic cavity mask R_Thorax applied and the GT highlighted in green, (c) SUV 2.5 threshold result, and (d) application of T 50%, using an SUV threshold of 7.47.

(a) Full Body PET MIP (b) MIP SUV 2.5 (c) MIP with R_Thorax applied and GT (d) MIP T 50% (e) MIP AP #2 Figure 3.8: Threshold results shown on a maximum intensity projection (MIP) of the PET study from case . (a) MIP of the full-body PET, (b) SUV 2.5 threshold result, (c) PET with the thoracic cavity mask R_Thorax applied and the ground truth shown in green, (d) application of T 50%, using an SUV threshold of 2.98, and (e) AP with the intensity restriction and a minimum threshold of SUV 2.0. AP #2 has thresholds T_k ∈ {3.00, 4.05, 5.49}.

(a) Full Body PET MIP (b) MIP SUV 2.5 (c) MIP with R_Thorax applied and GT (d) MIP T 50% (e) MIP AP #2 Figure 3.9: Threshold results shown on a maximum intensity projection (MIP) of the PET study from case . (a) MIP of the full-body PET, (b) SUV 2.5 threshold result, (c) PET with the thoracic cavity mask R_Thorax applied and the ground truth shown in green, (d) application of T 50%, using an SUV threshold of 1.35, and (e) AP with the intensity restriction and a minimum threshold of SUV 2.0. AP #2 has threshold T_k = 2.40.

(a) Full Body PET MIP (b) MIP SUV 2.5 (c) MIP with R_Thorax applied and GT (d) MIP T 50% (e) MIP AP #2 Figure 3.10: Threshold results shown on a maximum intensity projection (MIP) of the PET study from case . (a) MIP of the full-body PET, (b) SUV 2.5 threshold result, (c) PET with the thoracic cavity mask R_Thorax applied and the ground truth shown in green, (d) application of T 50%, using an SUV threshold of 3.94, and (e) AP with a minimum threshold of SUV 2.0. AP #2 has thresholds T_k ∈ {3.64, 5.27, 6.69}.

Case ID Threshold Value Total ROIs Size Filtered TP FP Sens. (%) PPV (%) Mean 21.9% Total Table 3.9: T 50% results, using a threshold of 50% of the maximum SUV intensity in the image. Case ID identifies the case. Threshold Value is the T 50% SUV value used for detection. Total ROIs indicates the total number of separate regions of interest found. Size Filtered lists the number of candidate ROIs remaining after removal of candidates with volumes less than 100 mm^3 or greater than 50 cm^3. TP is the number of true positives and FP the number of false positive detections. Sens. (%) identifies the sensitivity (TDR) and PPV (%) the positive predictive value.
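The per-case sensitivity (TDR) and PPV figures reported throughout Tables 3.5-3.9 follow the standard definitions. A minimal sketch with hypothetical counts (the case counts below are invented for illustration, not taken from the thesis dataset):

```python
def sensitivity(tp, fn):
    """Sensitivity / true detection rate: TP / (TP + FN)."""
    return tp / (tp + fn) if tp + fn else 0.0

def ppv(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp) if tp + fp else 0.0

# Hypothetical case: 3 of 4 ground-truth ROIs detected, with 6 false positives.
sens = sensitivity(tp=3, fn=1)   # 3 / 4 = 0.75
prec = ppv(tp=3, fp=6)           # 3 / 9, about 0.33
```

The zero-denominator guards mirror the degenerate situation of a case with no ground-truth ROIs or no detections at all.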

Method Sens. PPV TP/Case FP/Case SUV 2.5 76.7% 24.8% T 50% 46.7% 21.9% AP #1 78.3% 18.1% AP #2 68.3% 40.1% Table 3.10: Summary of the sensitivity, positive predictive value, TP per case, and FP per case of all methods tested across the 17 tested cases. Sens. and PPV represent each method's sensitivity (TDR) and positive predictive value, respectively. TP/Case and FP/Case indicate the mean numbers of TP and FP detected per case. Rows are as follows: AP #1: Approach 1, the proposed method with no PET intensity restriction. AP #2: Approach 2, the proposed method with the PET intensity restriction. SUV 2.5: results for SUV 2.5. T 50%: results for T 50%.

The MIP display tool only uses the threshold for generation of the MIP, so these extra regions are still visible. Table 3.8 shows the results of the SUV 2.5 analysis of each case, along with overall results using the same criteria as AP. Across all cases, the net sensitivity of SUV 2.5 is 76.7%, and the net PPV is 24.8%. Table 3.9 lists the results of using T 50% as a detection approach. Overall, T 50% identified 46.7% of the ground truth for the tested cases and provided a 21.9% PPV, with 6.2 FP/case.

Summary of Results

In this section, we compare and contrast the results from all tested methods. Table 3.10 provides a condensed summary of the results, listing only the aggregate values of the sensitivity, PPV, TP/case, and FP/case. Table 3.11 lists a complete comparison of the sensitivity and PPV of each method with results from all cases. Overall, Approach 1 (AP without intensity constraints) had the best sensitivity at 78.3%, followed by SUV 2.5 at 76.7%, AP Approach 2 at 68.3%, and T 50% at 46.7%. However, when considering the number of true positives and the positive predictive value, AP Approach 2 (AP with the intensity restriction) is the best, with a PPV of 40.1%. The next best approach was SUV 2.5, which had a 24.8% PPV. Of the selected methods, AP detection Approach 1 and SUV 2.5 correctly detected the most

AP #1 AP #2 SUV 2.5 T 50% Case ID Sens. PPV Sens. PPV Sens. PPV Sens. PPV Mean 78.3% 18.1% 68.3% 40.1% 76.7% 24.8% 58.3% 21.9% Table 3.11: Comparison of the sensitivity and positive predictive values of all methods tested. Case ID identifies the case. Each subsequent pair of columns, Sens. and PPV, represents one method's sensitivity (TDR) and positive predictive value, respectively. Ranges for Sens. and PPV are both 0-100% for all methods. Columns are as follows: AP #1: Approach 1, the proposed method with no PET intensity restriction. AP #2: Approach 2, the proposed method with the PET intensity restriction. SUV 2.5: results for SUV 2.5. T 50%: results for T 50%. ROIs, and AP Approach 2 had the best PPV ratio over all detections.

Computational Cost

We compare the speed of our implementation of the AP-based method to the results described by Foster et al. [22]. Foster et al.'s implementation took 0.66 seconds per two-dimensional slice and 1 minute for a complete 3D PET volume computation. While we do not test single slices in this thesis, we do test full-body PET/CT studies. Using the optimized parameters in Table 3.2, the average times for each step across three cases are listed in Table 3.12. We used the three parameter test cases (Section 3.2), which had a mean number of PET/CT slices N_z = 253

Operation Mean Execution Time Range
Histogram Generation 1,110 ms [930-1,240]
KDE via Diffusion 3.28 ms [ ]
Exponential Smoothing 2.72 ms [ ]
Affinity Propagation 500 ms [ ]
ROI Segmentation and Analysis 3,840 ms [2,560-4,780]
Average AP Iterations 50 [49-51]
Mean Total Time 7.10 sec. [ ]
Table 3.12: Average execution time for each step of the AP algorithm using cases 98, 108, and 116.

(range: ). Using an Intel workstation with 24 GB of memory and 12 cores running at 2.8 GHz, the AP algorithm takes an average of 7.1 seconds to detect and segment a whole-body PET study given R_Thorax using 256 histogram bins (range: seconds). Our AP clustering step requires between seconds to identify clusters, and the average time per AP iteration is 10.1 milliseconds. With these parameters, the steps that took the most time were histogram generation and ROI segmentation and analysis. Since we chose a modest value of M, we minimize the time spent in affinity propagation, making the algorithm fast and effective. Keen readers will notice a difference of about 1.64 seconds between the sum of the line items in Table 3.12 and the total time. We account for this as interface overhead, e.g., listing the detected ROIs in the results table of the CAD interface. To require an amount of time comparable to that described by Foster et al. [22], we would need to use a histogram with 1024 bins, as described in Table 3.3. Since our objectives included minimizing interactive time, M = 1024 did not add enough detection value to justify the time required.
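For readers curious about the message-passing loop whose per-iteration cost is reported above, the following is a compact, pure-Python sketch of Frey and Dueck's affinity propagation [23] applied to 1-D values (such as histogram-bin intensities). It is illustrative only: the damping factor, iteration count, and median preference are common defaults, not the tuned parameters of Table 3.2, and the thesis's C++ implementation differs.

```python
def affinity_propagation(points, damping=0.9, iters=200):
    """Cluster 1-D values by message passing; returns exemplar indices."""
    n = len(points)
    # Similarity: negative squared distance; shared preference = median
    # of the off-diagonal similarities (a common default).
    s = [[-(points[i] - points[k]) ** 2 for k in range(n)] for i in range(n)]
    offdiag = sorted(s[i][k] for i in range(n) for k in range(n) if i != k)
    pref = offdiag[len(offdiag) // 2]
    for i in range(n):
        s[i][i] = pref
    r = [[0.0] * n for _ in range(n)]  # responsibilities
    a = [[0.0] * n for _ in range(n)]  # availabilities
    for _ in range(iters):
        for i in range(n):  # responsibility update (damped)
            tot = [a[i][k] + s[i][k] for k in range(n)]
            m1 = max(tot)
            k1 = tot.index(m1)
            m2 = max(v for k, v in enumerate(tot) if k != k1)
            for k in range(n):
                new = s[i][k] - (m2 if k == k1 else m1)
                r[i][k] = damping * r[i][k] + (1 - damping) * new
        for k in range(n):  # availability update (damped)
            pos = [max(0.0, r[i][k]) for i in range(n)]
            total = sum(pos)
            for i in range(n):
                if i == k:
                    new = total - pos[k]
                else:
                    new = min(0.0, r[k][k] + total - pos[i] - pos[k])
                a[i][k] = damping * a[i][k] + (1 - damping) * new
    # Exemplars: points whose self-responsibility plus self-availability
    # is positive after convergence.
    return sorted(k for k in range(n) if r[k][k] + a[k][k] > 0)

# Two well-separated 1-D clusters; AP should pick one exemplar in each.
exemplars = affinity_propagation([1.0, 1.1, 1.2, 5.0, 5.1, 5.2])
```

Each iteration updates the full n-by-n responsibility and availability matrices, which is why the per-iteration cost grows with the number of histogram bins M and why a modest M keeps the clustering step fast.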

Discussion

In this thesis, we built an augmented implementation of the AP method described by Foster et al. [22]. Foster et al. focused on PET segmentation and quantification of disease progression in a longitudinal study of tuberculosis in rabbits. We apply our method to detect cancer in human PET/CT studies and use the 3D CT to constrain the detection to the thoracic cavity. As a result of this domain change, our optimized parameters differ to match our needs. In addition, we add false positive reduction procedures to the method. This total-system approach arrives at segmented ROIs. In our work, we use a consistent set of human PET/CT case studies across each method tested. This allows for a fair and even comparison across many different ROIs. However, a small number of low-intensity ground truth ROIs are not detected by any method. Given the relatively low resolution of the PET images, smaller or lower-intensity ROIs were more likely to be missed, due to the effects of PVE and the minimum volume restriction we enforced. One case, , had no GT ROIs correctly detected by any method we implemented. Close examination of the ROI (Table A.1) suggests this outcome results from the low maximum (2.43) and mean (2.07) SUV and the modest size of the ROI (1.9 cm^3). Several other ROIs were only detected by T 50%, such as those for cases and , each of which had only one low-intensity ROI. T 50% out-performs the other methods in detecting ROIs from cases with low SUV_max values. However, this algorithm tends to detect many FPs in those cases as well, because its threshold is relative to the maximum intensity. Applying T 50% to studies with no diagnostically relevant ROIs produces many low-intensity FP candidate ROIs for the user to review. Case performed better without application of the V_max restriction because one GT ROI (ROI #69 in Table A.1) is over 100 cm^3. Final results for this case study did not use the maximum volume restriction for Approaches 1 or 2.

We omitted another potential aggregation model from Section due to its limitations as an effective endpoint. Instead, we briefly address it here. Another approach for applying the outputs of this algorithm is for a user to select the threshold level that best suits their needs. This approach allows users to choose a single SUV value from the thresholds generated by our AP method and subsequently use only those ROI candidates for bronchoscopy planning. As a result of having only one threshold to define the candidates, we can then use the simple computation of TPR (3.2) for sensitivity. We used this aggregation model in early testing of the AP-based method and in preliminary exploration of the parameter space due to its simplicity. The benefit of this third approach is that it provides a way to obtain a single set of ROIs with minimal user input. By selecting the best threshold from the results, we reduce the total number of candidates that require user review. However, this model does not allow for a truly repeatable and automatic version with quantitative outputs independent of the user. Therefore, we do not include it in our AP testing for this thesis.
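The user-driven aggregation model described above amounts to picking, from the AP-generated thresholds, the one closest to the SUV level the user requests. A minimal sketch, reusing the example threshold values quoted for Figure 3.8 (the requested SUV is hypothetical):

```python
def nearest_threshold(ap_thresholds, requested_suv):
    """Return the AP-generated threshold closest to the requested SUV."""
    return min(ap_thresholds, key=lambda t: abs(t - requested_suv))

# Thresholds from the Figure 3.8 example; a user asking for SUV 4.0
# would be handed the 4.05 threshold and its candidate ROIs.
chosen = nearest_threshold([3.00, 4.05, 5.49], 4.0)
```

Because a single threshold then defines the candidate set, sensitivity reduces to the simple TPR computation mentioned in the text.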

Chapter 4

Conclusion

In this thesis, we developed a new semi-automated method to detect lung cancer in PET/CT studies. Our work was motivated by the desire to rapidly detect ROIs for bronchoscopic procedure planning, minimize user time, and minimize the number of false positives. We conveyed the method's importance to bronchoscopic procedure planning, described our current approaches to detection, surveyed the literature for suitable new methods, and chose an automated AP-based segmentation algorithm to adapt to our needs. In the Methods section, we described our AP-based threshold selection algorithm. Applying the 3D CT to focus on the thoracic cavity, we create a total-system approach to ROI detection. We detailed how the AP detection method functions within the MIPL. The method was implemented in C++ and integrated into the Multi-modal Visualization Display system. We tested the algorithm on a 20-case database of PET/CT studies and compared the results of the algorithm to existing methods in the literature. Using two approaches to ROI result aggregation, we successfully detected 78% of the ROIs using the first approach. The second approach restricted the range of SUV thresholds and detected 68% of the ROIs. However, its detection accuracy is much higher than that of the other methods tested (40.1% of detections were true positives, with 7.6 FP/case). Our algorithm is also fast, requiring only 7.1 seconds to detect and

segment ROIs from a case study. By identifying the segmented ROIs by their threshold, we allow the user to select a threshold value best suited to their needs. Our work makes several contributions to the field, building on the work previously developed by the MIPL:

- Novel application of a (semi-)automated PET segmentation method to cancer detection.
- Creation of a fast, reliable total-system approach to ROI detection in the thoracic cavity.
- Extension and augmentation of the MVD system by adding the automated AP method to it.
- Optimization of the parameters of the AP method for lung cancer detection.

Future work can expand on several aspects of this methodology. Improved algorithms could integrate information from both the PET and CT to define lung masses and lymph nodes, instead of relying on only one or the other. Additionally, this method could be adapted for PET ROI detection in other regions, such as the abdomen, chest, and neck.

Bibliography

[1] N. C. Dalrymple, S. R. Prasad, M. W. Freckleton, and K. N. Chintapalli, Informatics in radiology (infoRAD): Introduction to the language of three-dimensional imaging with multidetector CT, Radiographics, vol. 25, no. 5, pp , Sep
[2] S. Kligerman, The clinical staging of lung cancer through imaging: A radiologist's guide to the revised staging system and rationale for the changes, Radiol. Clin. N. Am., vol. 52, no. 1, pp , Jan
[3] T. M. Blodgett, C. C. Meltzer, and D. W. Townsend, PET/CT: Form and function, Radiology, vol. 242, no. 2, pp , Feb
[4] D. J. A. Margolis, J. M. Hoffman, R. J. Herfkens, R. B. Jeffrey, A. Quon, and S. S. Gambhir, Molecular imaging techniques in body imaging, Radiology, vol. 245, no. 2, pp , Nov
[5] R. Cheirsilp and W. E. Higgins, Multimodal 3D PET/CT System for Bronchoscopic Procedure Planning, in SPIE Medical Imaging 2013: Computer-Aided Diagnosis, C. L. Novak and S. Aylward, Eds., vol. 8670, Feb. 2013, pp X X 14.
[6] R. Khare, R. Bascom, and W. E. Higgins, Hands-Free System for Bronchoscopy Planning and Guidance, IEEE Transactions on Biomedical Engineering, vol. 62, no. 12, pp , Dec
[7] J. D. Gibbs, M. W. Graham, R. Bascom, D. C. Cornish, R. Khare, and W. E. Higgins, Optimal procedure planning and guidance system for peripheral bronchoscopy, IEEE Transactions on Biomedical Engineering, vol. 61, no. 3, pp ,
[8] S. A. Merritt, R. Khare, R. Bascom, and W. E. Higgins, Interactive CT-Video Registration for Image-Guided Bronchoscopy, IEEE Trans. Medical Imaging, vol. 32, no. 8, pp , Aug
[9] W. E. Higgins, X. Zang, R. Cheirsilp, P. Byrnes, T. Kuhlengel, R. Bascom, and J. Toth, Image-guided endobronchial ultrasound, in Proc. SPIE Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling, ser. American Thoracic Society International Conference Abstracts, vol. 9786, San Diego, California, Mar. 18, 2016, 97862G G 7.

[10] G. K. von Schulthess, H. C. Steinert, and T. F. Hany, Integrated PET/CT: Current Applications and Future Directions, Radiology, vol. 238, no. 2, pp , Feb. 1,
[11] S. Surti, A. Kuhn, M. E. Werner, A. E. Perkins, J. Kolthammer, and J. S. Karp, Performance of Philips Gemini TF PET/CT scanner with special consideration for its time-of-flight imaging capabilities, Journal of Nuclear Medicine, vol. 48, no. 3, pp ,
[12] M. Soret, S. L. Bacharach, and I. Buvat, Partial-Volume Effect in PET Tumor Imaging, Journal of Nuclear Medicine, vol. 48, no. 6, pp , Jan. 6,
[13] R. Cheirsilp, Multimodal Visualization Display for 3D PET/CT Image Analysis, Master's thesis, The Pennsylvania State University, State College, PA, Jul
[14] W. E. Higgins, R. Cheirsilp, R. Bascom, T. Allen, A. E. Dimmock, and T. Kuhlengel, Automated 3D PET/CT-based detection of suspect central-chest lesions, in Annals Am. Thorac. Soc., vol. 185, San Francisco, CA, May 18, 2012, A4431.
[15] R. Cheirsilp, R. Bascom, T. W. Allen, and W. E. Higgins, Thoracic Cavity Definition for 3D PET/CT Analysis and Visualization, Computers in Biology and Medicine, vol. 62, pp , Jul. 1,
[16] N. Otsu, A threshold selection method from gray-level histograms, Automatica, vol. 11, pp ,
[17] E. Deniaud-Alexandre, E. Touboul, D. Lerouge, D. Grahek, J.-N. Foulquier, Y. Petegnief, B. Grès, H. E. Balaa, K. Keraudy, K. Kerrou, F. Montravers, B. Milleron, B. Lebeau, and J.-N. Talbot, Impact of computed tomography and 18F-deoxyglucose coincidence detection emission tomography image fusion for optimization of conformal radiotherapy in non-small-cell lung cancer, International Journal of Radiation Oncology, Biology, Physics, vol. 63, no. 5, pp , Dec. 1,
[18] A. C. Paulino, M. Koshy, R. Howell, D. Schuster, and L. W. Davis, Comparison of CT- and FDG-PET-defined gross tumor volume in intensity-modulated radiotherapy for head-and-neck cancer, International Journal of Radiation Oncology, Biology, Physics, vol. 61, no. 5, pp , Apr. 1,
[19] R. Hong, J. Halama, D. Bova, A. Sethi, and B. Emami, Correlation of PET standard uptake value and CT window-level thresholds for target delineation in CT-based radiation treatment planning, International Journal of Radiation Oncology, Biology, Physics, vol. 67, no. 3, pp , Mar. 1,
[20] B. Foster, U. Bagci, A. Mansoor, Z. Xu, and D. J. Mollura, A review on segmentation of positron emission tomography images, Computers in Biology and Medicine, vol. 50, pp ,
[21] L. Drever, W. Roa, A. McEwan, and D. Robinson, Iterative threshold segmentation for PET target volume delineation, Medical Physics, vol. 34, no. 4, pp , Apr. 1, 2007.

[22] B. Foster, U. Bagci, Z. Xu, B. Dey, B. Luna, W. Bishai, S. Jain, and D. J. Mollura, Segmentation of PET Images for Computer-Aided Functional Quantification of Tuberculosis in Small Animal Models, IEEE Transactions on Biomedical Engineering, vol. 61, no. 3, pp , Mar
[23] B. J. Frey and D. Dueck, Clustering by Passing Messages Between Data Points, Science, vol. 315, no. 5814, pp , Feb. 16,
[24] P. Mildenberger, M. Eichelberg, and E. Martin, Introduction to the DICOM standard, European Radiology, vol. 12, no. 4, pp , Apr. 1,
[25] Z. I. Botev, J. F. Grotowski, and D. P. Kroese, Kernel density estimation via diffusion, The Annals of Statistics, vol. 38, no. 5, pp , Oct
[26] R. P. Brent, An algorithm with guaranteed convergence for finding a zero of a function, The Computer Journal, vol. 14, no. 4, pp ,
[27] T. S. Yoo, M. J. Ackerman, W. E. Lorensen, W. Schroeder, V. Chalana, S. Aylward, D. Metaxas, and R. Whitaker, Engineering and algorithm design for an image processing API: A technical report on ITK, the Insight Toolkit, in Digital Upgrades: Applying Moore's Law to Health, ser. Studies in Health Technology and Informatics, vol. 85, IOS Press, Jan. 2002, pp
[28] G. Bradski, Dr. Dobb's Journal of Software Tools,
[29] R. Cheirsilp, 3D Multimodal Image Analysis for Lung-Cancer Assessment, PhD thesis, The Pennsylvania State University, School of Electrical Engineering and Computer Science,
[30] T. Fawcett, An introduction to ROC analysis, Pattern Recognition Letters, ROC Analysis in Pattern Recognition, vol. 27, no. 8, pp , Jun

69 Appendix A Ground Truth Data A.1 Regions of Interest Table A.1: Ground-Truth PET/CT ROI database for thesis, containing 20 cases and 81 ROI. The columns Vol PET and Vol CT denote volume of PET and CT ROIs, respectively. The columns Minor PET and Minor CT denote minor-axis length of PET and CT ROIs, respectively. The PET ROIs with no CT correlate have both Vol CT and Minor CT =. Units for volume and Minor-axis length are cc and cm, respectively. Note that AP = aortopulmonary, while Stations correspond to the IASLC lymph-node map. No. Case ROI SUV mean SUV max Vol PET Minor PET Vol CT Minor CT Description P FDG Intense left upper paratracheal lymph node P FDG Intense right upper paratracheal lymph node P FDG Intense precarinal lymph node P FDG Intense AP window lymph node P FDG Intense left hilar lymph nodes P FDG Intense right hilar lymph nodes P FDG Intense subcarinal lymph nodes P FDG Intense left hilar lymph node P FDG Intense right hilar lymph nodes P FDG Intense left hilar lymph node P FDG Intense paraesophageal lymph node P FDG Moderate-Intense mediastinal lymph node (Station 7) P FDG Moderate right lower lobe sulcus nodule P FDG Intense subcarinal lymph node P FDG Intense left hilar lymph node P FDG Intense right hilar lymph node P FDG Intense left hilar lymph node P FDG Intense pretracheal lymph node Continued on next page

70 Table A.1 Continued from previous page No. Case ROI SUV mean SUV max Vol PET Minor PET Vol CT Minor CT Description P FDG Intense prevascular lymph nodes P FDG Intense PA window lymph node P FDG Moderate right prevascular lymph node P FDG Mild paraesophageal lymph node P FDG Intense precarinal lymph nodes P FDG Intense left lung fissural nodule P FDG Moderate right precarinal lymph nodes P FDG Intense right hilar lymph nodes P FDG Intense right subcarinal lymph node P FDG Moderate left inferior hilar lymph node P FDG Mild mediastinal lymph nodes P FDG Mild left hilar lymph node P FDG Mild left hilar lymph node P FDG Mild left hilar lymph node P FDG Minimal left upper lobe nodule P FDG Minimal right lower lobe nodule P FDG Intense right lower lobe mass P FDG Minimal lymph node (Station 7) P FDG Intense right upper lobe nodule P FDG Intense left upper lobe mass P FDG Mild lymph node (Station 11L) P FDG Moderate subcarinal lymph node P FDG Minimal right hilar lymph node P FDG Minimal AP window lymph node P FDG Minimal lymph node P FDG Minimal left hilar lymph node P FDG Mild lymph node P FDG Moderate right hilar lymph node P FDG Moderate lymph node (Station 7) P FDG Intense paratracheal lymph node P FDG Moderate-Intense right lower lobe nodule P FDG Moderate AP window lymph node P FDG Mild-Moderate precarinal lymph node P FDG Mild-Moderate AP window lymph node P FDG Mild-Moderate right subcarinal lymph node P FDG Mild nodule, slightly mis-registered P FDG Moderate lymph node P FDG Mild subcarinal lymph node P FDG Mild subcarinal lymph node P FDG Intense left upper lobe mass P FDG Mild lymph node (Station 4R) Continued on next page 63

71 Table A.1 Continued from previous page No. Case ROI SUV mean SUV max Vol PET Minor PET Vol CT Minor CT Description P FDG Intense AP window nodal mass P FDG Intense paraesophageal lymph node P FDG Moderate lymph node (Station 7) P FDG Moderate right hilar lymph node (Station 10-11R) P FDG Moderate-Intense lymph node (Station 4R) P FDG Intense mediastinal pleural-based nodule P FDG Intense lymph node (Station 2R) P FDG Moderate-Intense lymph nodes (Station 2R) P FDG Moderate lymph node (Station 2R) P FDG Intense interlobular pleural mass P FDG Intense lymph node P FDG Intense anterior mediastinal lymph nodes P FDG Moderate left upper hilar lymph node P FDG Intense left upper lobe mass P FDG Moderate-Intense right middle lobe nodule P FDG Intense lingular nodule P FDG Intense right lower lobe nodule P FDG Intense left lower lobe nodule P FDG Intense left lower lobe nodule P FDG Intense right hilar lymph node P FDG Minimal left lower lobe nodule P FDG Minimal left upper lobe nodule 64

Appendix B

MVD Affinity Propagation PET Segmentation Cookbook

In this appendix, we walk through how to use the MVD, focusing on the CAD and AP windows. We begin by selecting and loading the PET and CT images, then loading the ground truth ROIs in the ROI tool. Following that, we load and set the thoracic cavity mask, and then move on to the CAD and AP windows. The Affinity Propagation tool is opened from a button in the computer-aided detection (CAD) tool. As such, it works closely with the CAD tool and even outputs a results table using the CAD window outputs.

B.1 Preparation

Follow the numbers in the figures; the list items below match them as closely as possible.

1. First, select or give the path to the Philips CT in the main window, as shown in Figure B.1.
2. Select or give the path to the Philips PET in the main window.
3. Click the Load button. Allow a small amount of time for the image data to load.
4. Click the ROI button on the main window.
5. Optional: In the new window that pops up (Figure B.2), enter or select the file containing any ground truth ROIs.
6. Click the Load button in the ROI window.
7. In the bottom portion of the ROI window, select a thoracic cavity mask (.iroi file). Do this by entering the file path or by selecting it in a GUI via the folder button to the right of the field.

Figure B.1: Load the CT and PET images in the MVD main window. Ideally, both come from the free-breathing PET case study.

Figure B.2: The ROI window as shown after entering and loading all the relevant files and data.

8. Push the Load button on the lower portion of the window.
9. Right-click on the ROI that contains the thoracic cavity mask and click the Set as mask

Figure B.3: Set the thoracic cavity mask in the bottom half of the Master:ROI window.

option highlighted in Figure B.3. You may now choose to close the ROI window.

10. Go back to the Main window and click the CAD button shown in Figure B.
11. In the new window that pops up, shown in Figure B.2, click the Affinity Propagation button.
12. Review the parameters to make sure they match expectations, then push the Execute All button.
13. Optional: Users may also run each step individually. The buttons shown in Figure B.4 may be used in the following order:
(a) Generate Histogram
(b) KDE Smooth
(c) Exp Smooth
(d) Affinity Propagation
(e) Create ROIs
Order is important, though users can skip any step after histogram generation.
14. A table of results is displayed in the CAD window, as in Figure B.5, where the user can interactively review them. Users are able to see them on the slicers by right-clicking on an ROI and selecting Goto Center from the drop-down context menu, as shown in Figure B.
15. Exporting results to a file where they can be used in the VNS is also quite simple. Using the CAD window, click Save As, navigate to the folder to save in, and enter a file name. Alternatively, enter a location and filename using the Candidate ROI File input and click Save.

Figure B.4: The Affinity Propagation window. The left side shows the parameter settings; the right side shows the Testing tab with the histogram display.

Figure B.5: Screenshot of the CAD window after Affinity Propagation threshold selection. The context menu used to review a specific detection/segmentation result is shown in the middle.

This exports the items to a file in a VNS-compatible format. These can also be loaded in the MVD's ROI tool for further inspection or manipulation using the other available segmentation tools.


More information

Robust PDF Table Locator

Robust PDF Table Locator Robust PDF Table Locator December 17, 2016 1 Introduction Data scientists rely on an abundance of tabular data stored in easy-to-machine-read formats like.csv files. Unfortunately, most government records

More information

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS 130 CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS A mass is defined as a space-occupying lesion seen in more than one projection and it is described by its shapes and margin

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 04 130131 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Histogram Equalization Image Filtering Linear

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Brilliance CT Big Bore.

Brilliance CT Big Bore. 1 2 2 There are two methods of RCCT acquisition in widespread clinical use: cine axial and helical. In RCCT with cine axial acquisition, repeat CT images are taken each couch position while recording respiration.

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Modern Medical Image Analysis 8DC00 Exam

Modern Medical Image Analysis 8DC00 Exam Parts of answers are inside square brackets [... ]. These parts are optional. Answers can be written in Dutch or in English, as you prefer. You can use drawings and diagrams to support your textual answers.

More information

CRF Based Point Cloud Segmentation Jonathan Nation

CRF Based Point Cloud Segmentation Jonathan Nation CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to

More information

Part 3: Image Processing

Part 3: Image Processing Part 3: Image Processing Image Filtering and Segmentation Georgy Gimel farb COMPSCI 373 Computer Graphics and Image Processing 1 / 60 1 Image filtering 2 Median filtering 3 Mean filtering 4 Image segmentation

More information

8/3/2017. Contour Assessment for Quality Assurance and Data Mining. Objective. Outline. Tom Purdie, PhD, MCCPM

8/3/2017. Contour Assessment for Quality Assurance and Data Mining. Objective. Outline. Tom Purdie, PhD, MCCPM Contour Assessment for Quality Assurance and Data Mining Tom Purdie, PhD, MCCPM Objective Understand the state-of-the-art in contour assessment for quality assurance including data mining-based techniques

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Classification of Subject Motion for Improved Reconstruction of Dynamic Magnetic Resonance Imaging

Classification of Subject Motion for Improved Reconstruction of Dynamic Magnetic Resonance Imaging 1 CS 9 Final Project Classification of Subject Motion for Improved Reconstruction of Dynamic Magnetic Resonance Imaging Feiyu Chen Department of Electrical Engineering ABSTRACT Subject motion is a significant

More information

Biomedical Image Analysis. Point, Edge and Line Detection

Biomedical Image Analysis. Point, Edge and Line Detection Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth

More information

Multimodality Imaging for Tumor Volume Definition in Radiation Oncology

Multimodality Imaging for Tumor Volume Definition in Radiation Oncology 81 There are several commercial and academic software tools that support different segmentation algorithms. In general, commercial software packages have better implementation (with a user-friendly interface

More information

Integrated System for Planning Peripheral Bronchoscopic Procedures

Integrated System for Planning Peripheral Bronchoscopic Procedures Integrated System for Planning Peripheral Bronchoscopic Procedures Jason D. Gibbs, Michael W. Graham, Kun-Chang Yu, and William E. Higgins Penn State University Dept. of Electrical Engineering University

More information

Regression III: Advanced Methods

Regression III: Advanced Methods Lecture 3: Distributions Regression III: Advanced Methods William G. Jacoby Michigan State University Goals of the lecture Examine data in graphical form Graphs for looking at univariate distributions

More information

How and what do we see? Segmentation and Grouping. Fundamental Problems. Polyhedral objects. Reducing the combinatorics of pose estimation

How and what do we see? Segmentation and Grouping. Fundamental Problems. Polyhedral objects. Reducing the combinatorics of pose estimation Segmentation and Grouping Fundamental Problems ' Focus of attention, or grouping ' What subsets of piels do we consider as possible objects? ' All connected subsets? ' Representation ' How do we model

More information

Modeling and preoperative planning for kidney surgery

Modeling and preoperative planning for kidney surgery Modeling and preoperative planning for kidney surgery Refael Vivanti Computer Aided Surgery and Medical Image Processing Lab Hebrew University of Jerusalem, Israel Advisor: Prof. Leo Joskowicz Clinical

More information

Adaptive Fuzzy Connectedness-Based Medical Image Segmentation

Adaptive Fuzzy Connectedness-Based Medical Image Segmentation Adaptive Fuzzy Connectedness-Based Medical Image Segmentation Amol Pednekar Ioannis A. Kakadiaris Uday Kurkure Visual Computing Lab, Dept. of Computer Science, Univ. of Houston, Houston, TX, USA apedneka@bayou.uh.edu

More information

The Anatomical Equivalence Class Formulation and its Application to Shape-based Computational Neuroanatomy

The Anatomical Equivalence Class Formulation and its Application to Shape-based Computational Neuroanatomy The Anatomical Equivalence Class Formulation and its Application to Shape-based Computational Neuroanatomy Sokratis K. Makrogiannis, PhD From post-doctoral research at SBIA lab, Department of Radiology,

More information

Learning-based Neuroimage Registration

Learning-based Neuroimage Registration Learning-based Neuroimage Registration Leonid Teverovskiy and Yanxi Liu 1 October 2004 CMU-CALD-04-108, CMU-RI-TR-04-59 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract

More information

Prostate Detection Using Principal Component Analysis

Prostate Detection Using Principal Component Analysis Prostate Detection Using Principal Component Analysis Aamir Virani (avirani@stanford.edu) CS 229 Machine Learning Stanford University 16 December 2005 Introduction During the past two decades, computed

More information

Mirrored LH Histograms for the Visualization of Material Boundaries

Mirrored LH Histograms for the Visualization of Material Boundaries Mirrored LH Histograms for the Visualization of Material Boundaries Petr Šereda 1, Anna Vilanova 1 and Frans A. Gerritsen 1,2 1 Department of Biomedical Engineering, Technische Universiteit Eindhoven,

More information

I How does the formulation (5) serve the purpose of the composite parameterization

I How does the formulation (5) serve the purpose of the composite parameterization Supplemental Material to Identifying Alzheimer s Disease-Related Brain Regions from Multi-Modality Neuroimaging Data using Sparse Composite Linear Discrimination Analysis I How does the formulation (5)

More information

CS 664 Slides #11 Image Segmentation. Prof. Dan Huttenlocher Fall 2003

CS 664 Slides #11 Image Segmentation. Prof. Dan Huttenlocher Fall 2003 CS 664 Slides #11 Image Segmentation Prof. Dan Huttenlocher Fall 2003 Image Segmentation Find regions of image that are coherent Dual of edge detection Regions vs. boundaries Related to clustering problems

More information

Is deformable image registration a solved problem?

Is deformable image registration a solved problem? Is deformable image registration a solved problem? Marcel van Herk On behalf of the imaging group of the RT department of NKI/AVL Amsterdam, the Netherlands DIR 1 Image registration Find translation.deformation

More information

PMOD Features dedicated to Oncology Research

PMOD Features dedicated to Oncology Research While brain research using dynamic data has always been a main target of PMOD s developments, many scientists working with static oncology data have also found ways to leverage PMOD s unique functionality.

More information

RADIOMICS: potential role in the clinics and challenges

RADIOMICS: potential role in the clinics and challenges 27 giugno 2018 Dipartimento di Fisica Università degli Studi di Milano RADIOMICS: potential role in the clinics and challenges Dr. Francesca Botta Medical Physicist Istituto Europeo di Oncologia (Milano)

More information

Classification of Abdominal Tissues by k-means Clustering for 3D Acoustic and Shear-Wave Modeling

Classification of Abdominal Tissues by k-means Clustering for 3D Acoustic and Shear-Wave Modeling 1 Classification of Abdominal Tissues by k-means Clustering for 3D Acoustic and Shear-Wave Modeling Kevin T. Looby klooby@stanford.edu I. ABSTRACT Clutter is an effect that degrades the quality of medical

More information

Medical images, segmentation and analysis

Medical images, segmentation and analysis Medical images, segmentation and analysis ImageLab group http://imagelab.ing.unimo.it Università degli Studi di Modena e Reggio Emilia Medical Images Macroscopic Dermoscopic ELM enhance the features of

More information

Computer Aided Diagnosis Based on Medical Image Processing and Artificial Intelligence Methods

Computer Aided Diagnosis Based on Medical Image Processing and Artificial Intelligence Methods International Journal of Information and Computation Technology. ISSN 0974-2239 Volume 3, Number 9 (2013), pp. 887-892 International Research Publications House http://www. irphouse.com /ijict.htm Computer

More information

3D Human Airway Segmentation for Virtual Bronchoscopy

3D Human Airway Segmentation for Virtual Bronchoscopy 3D Human Airway Segmentation for Virtual Bronchoscopy Atilla P. Kiraly, 1 William E. Higgins, 1,2 Eric A. Hoffman, 2 Geoffrey McLennan, 2 and Joseph M. Reinhardt 2 1 Penn State University, University Park,

More information

MEDICAL IMAGE NOISE REDUCTION AND REGION CONTRAST ENHANCEMENT USING PARTIAL DIFFERENTIAL EQUATIONS

MEDICAL IMAGE NOISE REDUCTION AND REGION CONTRAST ENHANCEMENT USING PARTIAL DIFFERENTIAL EQUATIONS MEDICAL IMAGE NOISE REDUCTION AND REGION CONTRAST ENHANCEMENT USING PARTIAL DIFFERENTIAL EQUATIONS Miguel Alemán-Flores, Luis Álvarez-León Departamento de Informática y Sistemas, Universidad de Las Palmas

More information

Object Identification in Ultrasound Scans

Object Identification in Ultrasound Scans Object Identification in Ultrasound Scans Wits University Dec 05, 2012 Roadmap Introduction to the problem Motivation Related Work Our approach Expected Results Introduction Nowadays, imaging devices like

More information

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Outline An overview on classification Basics of classification How to choose appropriate

More information

UvA-DARE (Digital Academic Repository) Motion compensation for 4D PET/CT Kruis, M.F. Link to publication

UvA-DARE (Digital Academic Repository) Motion compensation for 4D PET/CT Kruis, M.F. Link to publication UvA-DARE (Digital Academic Repository) Motion compensation for 4D PET/CT Kruis, M.F. Link to publication Citation for published version (APA): Kruis, M. F. (2014). Motion compensation for 4D PET/CT General

More information

Digital Image Classification Geography 4354 Remote Sensing

Digital Image Classification Geography 4354 Remote Sensing Digital Image Classification Geography 4354 Remote Sensing Lab 11 Dr. James Campbell December 10, 2001 Group #4 Mark Dougherty Paul Bartholomew Akisha Williams Dave Trible Seth McCoy Table of Contents:

More information

Improvement of contrast using reconstruction of 3D Image by PET /CT combination system

Improvement of contrast using reconstruction of 3D Image by PET /CT combination system Available online at www.pelagiaresearchlibrary.com Advances in Applied Science Research, 2013, 4(1):285-290 ISSN: 0976-8610 CODEN (USA): AASRFC Improvement of contrast using reconstruction of 3D Image

More information

82 REGISTRATION OF RETINOGRAPHIES

82 REGISTRATION OF RETINOGRAPHIES 82 REGISTRATION OF RETINOGRAPHIES 3.3 Our method Our method resembles the human approach to image matching in the sense that we also employ as guidelines features common to both images. It seems natural

More information

Chapter 4. Clustering Core Atoms by Location

Chapter 4. Clustering Core Atoms by Location Chapter 4. Clustering Core Atoms by Location In this chapter, a process for sampling core atoms in space is developed, so that the analytic techniques in section 3C can be applied to local collections

More information

Weakly Supervised Fully Convolutional Network for PET Lesion Segmentation

Weakly Supervised Fully Convolutional Network for PET Lesion Segmentation Weakly Supervised Fully Convolutional Network for PET Lesion Segmentation S. Afshari a, A. BenTaieb a, Z. Mirikharaji a, and G. Hamarneh a a Medical Image Analysis Lab, School of Computing Science, Simon

More information

Image Segmentation. Shengnan Wang

Image Segmentation. Shengnan Wang Image Segmentation Shengnan Wang shengnan@cs.wisc.edu Contents I. Introduction to Segmentation II. Mean Shift Theory 1. What is Mean Shift? 2. Density Estimation Methods 3. Deriving the Mean Shift 4. Mean

More information

Logical Templates for Feature Extraction in Fingerprint Images

Logical Templates for Feature Extraction in Fingerprint Images Logical Templates for Feature Extraction in Fingerprint Images Bir Bhanu, Michael Boshra and Xuejun Tan Center for Research in Intelligent Systems University of Califomia, Riverside, CA 9252 1, USA Email:

More information

Machine Learning for Pre-emptive Identification of Performance Problems in UNIX Servers Helen Cunningham

Machine Learning for Pre-emptive Identification of Performance Problems in UNIX Servers Helen Cunningham Final Report for cs229: Machine Learning for Pre-emptive Identification of Performance Problems in UNIX Servers Helen Cunningham Abstract. The goal of this work is to use machine learning to understand

More information

Level-set MCMC Curve Sampling and Geometric Conditional Simulation

Level-set MCMC Curve Sampling and Geometric Conditional Simulation Level-set MCMC Curve Sampling and Geometric Conditional Simulation Ayres Fan John W. Fisher III Alan S. Willsky February 16, 2007 Outline 1. Overview 2. Curve evolution 3. Markov chain Monte Carlo 4. Curve

More information

doi: /

doi: / Yiting Xie ; Anthony P. Reeves; Single 3D cell segmentation from optical CT microscope images. Proc. SPIE 934, Medical Imaging 214: Image Processing, 9343B (March 21, 214); doi:1.1117/12.243852. (214)

More information

Ensemble registration: Combining groupwise registration and segmentation

Ensemble registration: Combining groupwise registration and segmentation PURWANI, COOTES, TWINING: ENSEMBLE REGISTRATION 1 Ensemble registration: Combining groupwise registration and segmentation Sri Purwani 1,2 sri.purwani@postgrad.manchester.ac.uk Tim Cootes 1 t.cootes@manchester.ac.uk

More information

arxiv: v1 [cs.cv] 6 Jun 2017

arxiv: v1 [cs.cv] 6 Jun 2017 Volume Calculation of CT lung Lesions based on Halton Low-discrepancy Sequences Liansheng Wang a, Shusheng Li a, and Shuo Li b a Department of Computer Science, Xiamen University, Xiamen, China b Dept.

More information

Kernel Density Estimation (KDE)

Kernel Density Estimation (KDE) Kernel Density Estimation (KDE) Previously, we ve seen how to use the histogram method to infer the probability density function (PDF) of a random variable (population) using a finite data sample. In this

More information

[Programming Assignment] (1)

[Programming Assignment] (1) http://crcv.ucf.edu/people/faculty/bagci/ [Programming Assignment] (1) Computer Vision Dr. Ulas Bagci (Fall) 2015 University of Central Florida (UCF) Coding Standard and General Requirements Code for all

More information

Using the Deformable Part Model with Autoencoded Feature Descriptors for Object Detection

Using the Deformable Part Model with Autoencoded Feature Descriptors for Object Detection Using the Deformable Part Model with Autoencoded Feature Descriptors for Object Detection Hyunghoon Cho and David Wu December 10, 2010 1 Introduction Given its performance in recent years' PASCAL Visual

More information

Segmenting Lesions in Multiple Sclerosis Patients James Chen, Jason Su

Segmenting Lesions in Multiple Sclerosis Patients James Chen, Jason Su Segmenting Lesions in Multiple Sclerosis Patients James Chen, Jason Su Radiologists and researchers spend countless hours tediously segmenting white matter lesions to diagnose and study brain diseases.

More information

University of Florida CISE department Gator Engineering. Clustering Part 4

University of Florida CISE department Gator Engineering. Clustering Part 4 Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

RT_Image v0.2β User s Guide

RT_Image v0.2β User s Guide RT_Image v0.2β User s Guide RT_Image is a three-dimensional image display and analysis suite developed in IDL (ITT, Boulder, CO). It offers a range of flexible tools for the visualization and quantitation

More information

Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies

Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies g Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies Presented by Adam Kesner, Ph.D., DABR Assistant Professor, Division of Radiological Sciences,

More information

Ulrik Söderström 16 Feb Image Processing. Segmentation

Ulrik Söderström 16 Feb Image Processing. Segmentation Ulrik Söderström ulrik.soderstrom@tfe.umu.se 16 Feb 2011 Image Processing Segmentation What is Image Segmentation? To be able to extract information from an image it is common to subdivide it into background

More information

Medical Image Registration by Maximization of Mutual Information

Medical Image Registration by Maximization of Mutual Information Medical Image Registration by Maximization of Mutual Information EE 591 Introduction to Information Theory Instructor Dr. Donald Adjeroh Submitted by Senthil.P.Ramamurthy Damodaraswamy, Umamaheswari Introduction

More information

Limitations of Projection Radiography. Stereoscopic Breast Imaging. Limitations of Projection Radiography. 3-D Breast Imaging Methods

Limitations of Projection Radiography. Stereoscopic Breast Imaging. Limitations of Projection Radiography. 3-D Breast Imaging Methods Stereoscopic Breast Imaging Andrew D. A. Maidment, Ph.D. Chief, Physics Section Department of Radiology University of Pennsylvania Limitations of Projection Radiography Mammography is a projection imaging

More information

Clustering Part 4 DBSCAN

Clustering Part 4 DBSCAN Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

6. Object Identification L AK S H M O U. E D U

6. Object Identification L AK S H M O U. E D U 6. Object Identification L AK S H M AN @ O U. E D U Objects Information extracted from spatial grids often need to be associated with objects not just an individual pixel Group of pixels that form a real-world

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh, CEO at Mapizy (www.mapizy.com) and InFarm (www.infarm.io) Lecture 02 Binary Image Analysis Objectives Revision of image formation

More information

Edge-Preserving Denoising for Segmentation in CT-Images

Edge-Preserving Denoising for Segmentation in CT-Images Edge-Preserving Denoising for Segmentation in CT-Images Eva Eibenberger, Anja Borsdorf, Andreas Wimmer, Joachim Hornegger Lehrstuhl für Mustererkennung, Friedrich-Alexander-Universität Erlangen-Nürnberg

More information

CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM

CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM 96 CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM Clustering is the process of combining a set of relevant information in the same group. In this process KM algorithm plays

More information

Engineering Problem and Goal

Engineering Problem and Goal Engineering Problem and Goal Engineering Problem: Traditional active contour models can not detect edges or convex regions in noisy images. Engineering Goal: The goal of this project is to design an algorithm

More information

DUE to beam polychromacity in CT and the energy dependence

DUE to beam polychromacity in CT and the energy dependence 1 Empirical Water Precorrection for Cone-Beam Computed Tomography Katia Sourbelle, Marc Kachelrieß, Member, IEEE, and Willi A. Kalender Abstract We propose an algorithm to correct for the cupping artifact

More information

Image Processing

Image Processing Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore

More information

Chapter 3: Intensity Transformations and Spatial Filtering

Chapter 3: Intensity Transformations and Spatial Filtering Chapter 3: Intensity Transformations and Spatial Filtering 3.1 Background 3.2 Some basic intensity transformation functions 3.3 Histogram processing 3.4 Fundamentals of spatial filtering 3.5 Smoothing

More information

Prototype of Silver Corpus Merging Framework

Prototype of Silver Corpus Merging Framework www.visceral.eu Prototype of Silver Corpus Merging Framework Deliverable number D3.3 Dissemination level Public Delivery data 30.4.2014 Status Authors Final Markus Krenn, Allan Hanbury, Georg Langs This

More information

Statistical Analysis of Metabolomics Data. Xiuxia Du Department of Bioinformatics & Genomics University of North Carolina at Charlotte

Statistical Analysis of Metabolomics Data. Xiuxia Du Department of Bioinformatics & Genomics University of North Carolina at Charlotte Statistical Analysis of Metabolomics Data Xiuxia Du Department of Bioinformatics & Genomics University of North Carolina at Charlotte Outline Introduction Data pre-treatment 1. Normalization 2. Centering,

More information

Unsupervised Learning

Unsupervised Learning Unsupervised Learning Unsupervised learning Until now, we have assumed our training samples are labeled by their category membership. Methods that use labeled samples are said to be supervised. However,

More information

Dr. Ulas Bagci

Dr. Ulas Bagci CAP5415-Computer Vision Lecture 11-Image Segmentation (BASICS): Thresholding, Region Growing, Clustering Dr. Ulas Bagci bagci@ucf.edu 1 Image Segmentation Aim: to partition an image into a collection of

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Random projection for non-gaussian mixture models

Random projection for non-gaussian mixture models Random projection for non-gaussian mixture models Győző Gidófalvi Department of Computer Science and Engineering University of California, San Diego La Jolla, CA 92037 gyozo@cs.ucsd.edu Abstract Recently,

More information

Edge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels

Edge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface

More information

Data Preprocessing. S1 Teknik Informatika Fakultas Teknologi Informasi Universitas Kristen Maranatha

Data Preprocessing. S1 Teknik Informatika Fakultas Teknologi Informasi Universitas Kristen Maranatha Data Preprocessing S1 Teknik Informatika Fakultas Teknologi Informasi Universitas Kristen Maranatha 1 Why Data Preprocessing? Data in the real world is dirty incomplete: lacking attribute values, lacking

More information

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular

More information

Cost Models for Query Processing Strategies in the Active Data Repository

Cost Models for Query Processing Strategies in the Active Data Repository Cost Models for Query rocessing Strategies in the Active Data Repository Chialin Chang Institute for Advanced Computer Studies and Department of Computer Science University of Maryland, College ark 272

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Computational Medical Imaging Analysis Chapter 4: Image Visualization
