Fully Automatic Model Creation for Object Localization utilizing the Generalized Hough Transform


Heike Ruppertshofen 1,2,3, Cristian Lorenz 2, Peter Beyerlein 4, Zein Salah 3, Georg Rose 3, Hauke Schramm 1

1 Institut für Angewandte Informatik, Fachhochschule Kiel
2 Sector Medical Imaging Systems, Philips Research Europe - Hamburg
3 Institut für Elektronik, Signalverarbeitung und Kommunikationstechnik, Otto-von-Guericke-Universität Magdeburg
4 Fachbereich Ingenieurwesen, Technische Fachhochschule Wildau

heike.ruppertshofen@fh-kiel.de

Abstract. An approach for automatic object localization in medical images utilizing an extended version of the generalized Hough transform (GHT) is presented. In our approach, the shape model in the GHT is equipped with specific model point weights, which are used in the voting process. The weights are adjusted in a discriminative training procedure, which aims at a minimal localization error of the GHT. Furthermore, these weights yield information about the importance of points, such that non-discriminative points can be excluded from the model. Thereby, the size of the model is decreased, thus reducing processing time. The algorithm is presented in 2D but should be easily extendable to 3D images.

1 Introduction

The task of object localization in medical images is of high interest in many applications in the area of computer-aided diagnosis, e.g., for a fully automatic segmentation of organs or bones. Model-based segmentation requires an initial positioning of the model in the image. This initialization is often carried out manually, or specialized solutions are developed for the particular problem at hand using, e.g., gray-value thresholding, morphological operators or anatomical knowledge [1].
More advanced automatic approaches are given by atlas-based registration [2] or by an initial global search for the object conducted, e.g., with evolutionary algorithms [3], template matching [4] or the generalized Hough transform (GHT) [5]. In this work, we focus on the GHT, which is in general a robust method and therefore well suited even for the localization of partially occluded or noisy objects. However, one drawback is the high computational demand caused by the vast global search, as is the case for all aforementioned automatic procedures. To reduce the computational complexity, two measures have been taken. On the one hand, we do not search for scaled or rotated occurrences of our object, but instead try to include this variation in our model. On the other hand, a weight is assigned to each model point according to its importance, such that non-relevant points can be removed from the model, thus reducing model size and runtime. Model point weights are determined through a log-linear combination of model point knowledge and a discriminative training [6] with respect to a minimal localization error.

2 Materials and Methods

2.1 Generalized Hough Transform

The GHT [7] is a model-based method for object localization utilizing a voting process to identify the likeliest position of the object of interest. To this end, the image space is mapped to a transformation parameter space, known as Hough space, which depicts possible locations of the object. The shape model of the object is represented by vectors from each model point to a given reference point, usually the center of the model. Furthermore, directional information obtained from the gradient direction is added to each point. This information is stored in a look-up table, the so-called R-table, in which model points with similar gradient directions are grouped together.

To perform the localization, an edge image is created from the original image. For each edge point e_i, the model points m_j with similar gradient direction are retrieved from the R-table. Through the relation c = e_i + m_j, the position c of a possible reference point is determined, and the vote in the corresponding cell of the Hough space is increased by the weight of the model point m_j. At the end of the voting process, the Hough space is searched and the cell with the highest vote is returned as the assumed position of the object. The GHT can be extended to find rotated or scaled occurrences of the object of interest as well; however, due to a vast rise in complexity, these extensions will not be considered here.
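The voting procedure described above can be sketched as follows. This is a minimal Python/NumPy illustration, not the authors' Matlab implementation; the function names, the choice of 36 direction bins and the per-point weight layout are assumptions made for the sketch.

```python
import numpy as np

def build_r_table(displacements, gradient_dirs, weights=None, n_bins=36):
    """R-table: displacement vectors m_j (model point -> reference point),
    grouped by quantized gradient direction, each with a vote weight."""
    if weights is None:
        weights = [1.0] * len(displacements)  # initial model: equal weights
    table = {}
    for d, theta, w in zip(displacements, gradient_dirs, weights):
        b = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        table.setdefault(b, []).append((d, w))
    return table

def ght_vote(edge_points, edge_dirs, table, shape, n_bins=36):
    """Cast weighted votes c = e_i + m_j into the Hough space; the cell
    with the highest vote is the assumed reference-point position."""
    hough = np.zeros(shape)
    for e, theta in zip(edge_points, edge_dirs):
        b = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        for (dx, dy), w in table.get(b, []):
            cx, cy = e[0] + dx, e[1] + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                hough[cx, cy] += w
    return hough
```

Localization then amounts to `np.unravel_index(np.argmax(hough), shape)`.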
The model used in our extended GHT is built by extracting a number of edge points from a predefined region of interest around the reference point, which avoids the manual generation of a shape model. Yet, the model may contain points that are not relevant for object localization. To reduce its size, the importance of the model points is determined using a discriminative training algorithm as described below, and points with low importance are eliminated thereafter.

2.2 Discriminative Training of Model Point Weights

Let us regard the model points of our initial model as individual sources of knowledge via their contribution to the Hough space. The aim of the training approach is to determine the importance of points, which is achieved by an adequate combination of the knowledge sources and a minimal localization error criterion. For the derivation of the training approach, we have to take a probabilistic view of the GHT. Instead of searching for the maximal number of votes in the

Hough space, we define a posterior probability

p(c_i | X) = N_i / N   (1)

with c_i being cell i in the Hough space, X a set of features extracted from a given image, and N_i and N the number of votes in cell i and in the complete Hough space, respectively. By identifying the cell with the highest posterior probability, the localization task is now realized with a Bayesian classifier ĉ = argmax_{c_i} p(c_i | X). The posterior probability of the Hough cells can be further divided into contributions from the individual model points

p_j(c_i | X) = N_{i,j} / N_j   (2)

where N_{i,j} and N_j are the numbers of votes cast by model point j into cell i and into the complete Hough space, respectively. A recombination of the model point posteriors can be achieved through log-linear modeling following the maximum entropy principle [8]

p_Λ(c_i | X) = exp( Σ_j λ_j log p_j(c_i | X) ) / Σ_k exp( Σ_j λ_j log p_j(c_k | X) )   (3)

The coefficients Λ = {λ_j}_j are the model point weights, which need to be estimated. To this end, an error function E is defined, which accumulates the localization error over the different images X_n

E(Λ) = Σ_n Σ_i [ p_Λ(c_i | X_n)^η / Σ_k p_Λ(c_k | X_n)^η ] ε(c_n, c_i)   (4)

Here, the Euclidean distance of the correct cell c_n to a cell c_i in the Hough space is chosen as the error measure ε, weighted by an indicator function. The exponent η in the indicator function controls the influence of the alternative hypotheses c_k on the error measure. By applying a gradient descent scheme to (4), the optimal weights λ_j can be determined with respect to a minimal localization error [6].

2.3 Material and Design of Experiments

The algorithm was tested on a set of 30 thorax radiographs, which were acquired in a follow-up study of 11 patients. The images have an isotropic resolution of 0.185 mm with varying image sizes. To reduce noise in the training process, the images were downsampled twice with a Gaussian filter to a spacing of 0.74 mm. The given task is to localize the collar bone in all images. For this purpose, the intersection of collar bone and lung wall was annotated to obtain a ground truth for training as well as evaluation of the algorithm. The dataset was divided into a training and a test dataset, each consisting of 15 images. The model used for the localization task was created from a set of 8 training images.
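Equations (3) and (4) can be sketched numerically as follows. This is a hedged Python/NumPy illustration, assuming the per-point posteriors p_j(c_i | X) of Eq. (2) have already been tabulated from the Hough votes; the function names and the small epsilon guarding log(0) are choices made for the sketch, not part of the paper.

```python
import numpy as np

def loglinear_posterior(point_posteriors, lam, eps=1e-12):
    """Eq. (3): recombine per-model-point posteriors p_j(c_i|X), given as an
    array of shape (n_points, n_cells), with weights lambda_j."""
    log_p = np.log(point_posteriors + eps)  # eps avoids log(0)
    scores = lam @ log_p                    # sum_j lambda_j log p_j(c_i|X)
    scores -= scores.max()                  # shift for numerical stability
    p = np.exp(scores)
    return p / p.sum()

def localization_error(point_posteriors, lam, cell_coords, true_cell, eta=1.0):
    """Eq. (4) for a single image: expected Euclidean distance between the
    correct cell c_n and each Hough cell c_i, with exponent eta controlling
    the influence of alternative hypotheses."""
    p = loglinear_posterior(point_posteriors, lam) ** eta
    p /= p.sum()
    dist = np.linalg.norm(cell_coords - true_cell, axis=1)
    return float(p @ dist)
```

A gradient descent on `localization_error`, summed over all training images, would then yield the optimal weights λ_j.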

Table 1. Comparison of the initial and the final model regarding the number of model points, the average runtime per image in minutes, and the localization error on the training and test data in mm, displayed as mean ± std (max).

                model size   runtime   error on training data   error on test data
initial model      2684        1.2        2.2 ± 2.1 (6.9)         3.8 ± 2.6 (10.2)
final model         920        0.7        1.9 ± 2.1 (8.2)         3.0 ± 2.1 (6.7)

Using this model, the GHT is performed on the training images with equal weights for all points, followed by the computation of the optimal model point weights as described above. Based on these weights, a subset of points necessary for object localization is determined as the final model. Subsequently, the weighted final model is tested on the complete dataset. Experiments were run on a 2.4 GHz desktop PC. The GHT is implemented in Matlab and not yet optimized for speed.

3 Results

The initial model extracted from part of the training images and the final model resulting from the discriminative training are shown in Fig. 1. It is clearly visible that the computed point weights allow a strong reduction of the model size. Only about 35% of the initial points were kept, reducing the processing time by almost 50%, as can be seen in Tab. 1. Closer examination of the model shows that not only the edge of the collar bone is important for its detection; the lung wall and surrounding bones are also of high interest. The inclusion of these structures helps to distinguish the collar bone from the ribs, which show a similar edge response.

Since the initial model was extracted from the training data, a very good localization result was achieved on these images, which even improved slightly for the final model. As one could expect, the error on the unknown test data is slightly higher, but should be sufficient for the initialization of a segmentation.

Fig. 1. Comparison of the initial model (left) extracted from the images and the final model (middle). Shown are the point locations with corresponding gradient directions. For illustration, an annotated example image is displayed on the right.

4 Discussion

The combination of the GHT with the discriminative training algorithm to determine adequate model point weights shows promising results. The introduction of point weights into the model brings several advantages. First, no initial shape model needs to be extracted from the data via automatic or manual segmentation; a number of edge points from the region of interest is sufficient for the GHT algorithm. Second, large models can be exploited during training, since the number of model points can be diminished afterwards, keeping only discriminative points. In addition, neighboring structures, which may be helpful in the localization task, can easily be included in the model by extracting a larger ROI around the object of interest.

Furthermore, by extracting the model directly from the dataset, the variation of the object of interest in the underlying images is captured. Thus no rotation or scaling of the model was necessary in the GHT, which implies a considerable reduction of runtime and memory requirements. Runtime was further decreased through the reduction of model size, while the localization error slightly improved to about 3.0 mm on average.

The algorithm is expected to be applicable to the detection of arbitrary objects that show a clear response in the edge image. Experiments will be conducted to identify further possible features in order to detect objects that are not characterized by their edge image. Future work will also include an iterative approach, where random points are gradually included in the model and evaluated by discriminative model combination (DMC) to further improve localization accuracy.

Acknowledgement. We thank Dr. Cornelia Schaefer-Prokop, AMC Amsterdam, for the abundance of radiographs.

References

1. Heimann T, van Ginneken B, Styner M, et al. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans Med Imaging. 2009;28(8):1251-65.
2. Seghers D, Slagmolen P, Lambelin Y, et al. Landmark based liver segmentation using local shape and local intensity models. In: Proc MICCAI; 2007. p. 135-42.
3. Heimann T, Münziger S, Meinzer HP, et al. A shape-guided deformable model with evolutionary algorithm initialization for 3D soft tissue segmentation. In: Proc IPMI; 2007. p. 1-12.
4. Barthel D, von Berg J. Robust automatic lung field segmentation on digital chest radiographs. Int J CARS. 2009;4(Suppl 1):326-7.
5. Schramm H, Ecabert O, Peters J, et al. Towards fully automatic object detection and segmentation. In: Proc SPIE; 2006. p. 614402.
6. Beyerlein P. Discriminative model combination. In: Proc ICASSP; 1998. p. 481-4.
7. Ballard DH. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981;13(2):111-22.
8. Jaynes ET. Information theory and statistical mechanics. Phys Rev. 1957;106(4):620-30.