Geometry-Based Optic Disk Tracking in Retinal Fundus Videos


Anja Kürten 1, Thomas Köhler 1,2, Attila Budai 1,2, Ralf-Peter Tornow 3, Georg Michelson 2,3, Joachim Hornegger 1,2
1 Pattern Recognition Lab, FAU Erlangen-Nürnberg
2 Erlangen Graduate School in Advanced Optical Technologies (SAOT)
3 Department of Ophthalmology, FAU Erlangen-Nürnberg
anja-kuerten@web.de

Abstract. Fundus video cameras enable the acquisition of image sequences to analyze fast temporal changes on the human retina in a noninvasive manner. In this work, we propose a tracking-by-detection scheme for the optic disk to capture human eye motion on-line during examination. Our approach exploits the elliptical shape of the optic disk: we employ the fast radial symmetry transform for an efficient estimation of the disk center point in successive frames. Large eye movements due to saccades, motion of the head, or blinking are detected automatically by a correlation analysis to guide the tracking procedure. In our experiments on real video data acquired by a low-cost video camera, the proposed method yields a hit rate of 98% with a normalized median accuracy of 4% of the disk diameter. The achieved frame rate of 28 frames per second enables real-time application of our approach.

1 Introduction

Fundus photography is a widely used modality to diagnose diseases such as diabetic retinopathy or glaucoma by detecting anomalies in single 2-D images. Novel fundus video cameras enable the observation of fast temporal changes on the retina for new applications such as measuring the time course of the fundus reflection to analyze the cardiac cycle [1]. During examination, the human eye undergoes natural as well as guided movements. For applications such as patient guidance or the fixation of a certain position on the fundus by a stimulus, eye motion must be captured on-line as feedback in a control system.
Tracking algorithms determine the position of an object in successive frames and can be used to capture eye motion. The optic disk (OD), visible as a bright elliptical spot, represents a robust feature for tracking. While there are numerous approaches for OD localization, OD tracking remains challenging and has received less attention in the literature. Difficulties arise from the decreased image quality, in terms of resolution and illumination conditions, of common fundus video cameras compared to conventional fundus cameras [2]. Furthermore, tracking has to deal with large eye movements due to saccades, head motion, or blinking during examination. In an early work, Koozekanani et al. [3] proposed a method to track the OD using a tiered detection scheme combining Hough transform, eigenimage analysis, and geometric analysis. However, this approach is computationally demanding, which makes real-time application cumbersome. In terms of object detection methods, Loy and Zelinsky [4] introduced the fast radial symmetry transform (FRST) as a point-of-interest detector to highlight points of high radial symmetry. Compared to the circular Hough transform, this method has a low computational complexity and is thus well suited for real-time applications. Recently, the FRST has been proposed by Budai et al. [5] for OD localization in high-resolution fundus photographs.

In this paper, we propose a tracking algorithm that exploits the geometry of the OD to capture its radius and position in successive frames of a fundus image sequence. Our method employs the FRST in a tracking-by-detection scheme. Large eye motion due to saccades or head movements is detected automatically by a correlation analysis to guide the tracking procedure. The proposed method enables real-time OD tracking during examination.

2 Materials and Methods

2.1 Fast Radial Symmetry Transform

For every pixel p in the input image I, the position of the center point candidate c(p) is given by

c(p) = p + \frac{\nabla I(p)}{\| \nabla I(p) \|_2} \, r,

where \nabla I(p) is the gradient of the input image and r is the radius. An orientation projection image O_r and a magnitude projection image M_r of the same size as I are initialized with zeros.
Then, the values are incremented for every center point candidate according to

O_r(c(p)) := \begin{cases} O_r(c(p)) + 1 & \text{if } \gamma \geq \|\nabla I(p)\|_2 \geq \beta \\ O_r(c(p)) & \text{otherwise} \end{cases}   (1)

M_r(c(p)) := \begin{cases} M_r(c(p)) + \|\nabla I(p)\|_2 & \text{if } \gamma \geq \|\nabla I(p)\|_2 \geq \beta \\ M_r(c(p)) & \text{otherwise} \end{cases}   (2)

where O_r(c(p)) and M_r(c(p)) denote the orientation projection and magnitude projection image at center point candidate c(p), \beta is a gradient threshold to exclude small gradients caused by noise, and \gamma is a gradient threshold to exclude high gradients that are likely caused by blood vessels [5]. The projection images are combined into a single map F_r(p) according to

F_r(p) = \frac{M_r(p)}{k_r} \left( \frac{\tilde{O}_r(p)}{k_r} \right)^{\alpha}, \quad \tilde{O}_r(p) = \begin{cases} O_r(p) & \text{if } O_r(p) < k_r \\ k_r & \text{otherwise} \end{cases}   (3)

where \alpha is the radial strictness parameter and k_r is a normalization factor. Finally, the radial symmetry contribution map S_r is computed as S_r = F_r * A_r, where A_r is a two-dimensional Gaussian with standard deviation \sigma = 0.25 r. Local maxima in S_r indicate bright regions of high radial symmetry and are detected for OD localization.
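As an illustration, the voting and combination steps above can be sketched with NumPy. This is a minimal sketch of the bright-region FRST rather than the C++ implementation used in the experiments; the default gradient thresholds, the choice of the normalization factor k_r as the maximum orientation count, and the Gaussian truncation width are placeholder assumptions.

```python
import numpy as np

def frst(image, radius, alpha=2.0, beta=1.0, gamma=80.0):
    """Sketch of the fast radial symmetry transform for bright blobs.
    beta/gamma (gradient thresholds) and the k_r choice are assumed
    placeholder values, not the tuned parameters of the paper."""
    gy, gx = np.gradient(image.astype(float))   # gradients along y, x
    mag = np.hypot(gx, gy)
    h, w = image.shape
    O = np.zeros((h, w))    # orientation projection image O_r
    M = np.zeros((h, w))    # magnitude projection image M_r
    # keep only gradients with beta <= |grad I| <= gamma (Eqs. 1-2)
    ys, xs = np.nonzero((mag >= beta) & (mag <= gamma))
    for y, x in zip(ys, xs):
        # center point candidate: step 'radius' along the unit gradient
        cy = int(round(y + radius * gy[y, x] / mag[y, x]))
        cx = int(round(x + radius * gx[y, x] / mag[y, x]))
        if 0 <= cy < h and 0 <= cx < w:
            O[cy, cx] += 1.0
            M[cy, cx] += mag[y, x]
    k = max(O.max(), 1.0)               # normalization factor k_r (assumed)
    O_clip = np.minimum(O, k)           # \tilde{O}_r: clip at k_r
    F = (M / k) * (O_clip / k) ** alpha # combined map F_r (Eq. 3)
    # S_r = F_r * A_r: separable Gaussian smoothing with sigma = 0.25 r
    sigma = 0.25 * radius
    size = int(2 * round(3 * sigma) + 1)
    ax = np.arange(size) - size // 2
    g1 = np.exp(-ax ** 2 / (2 * sigma ** 2))
    S = np.apply_along_axis(lambda row: np.convolve(row, g1, mode="same"), 1, F)
    S = np.apply_along_axis(lambda col: np.convolve(col, g1, mode="same"), 0, S)
    return S
```

On a synthetic bright disk, the maximum of the returned map lies at the disk center, which is exactly the property the localization step relies on.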

2.2 Tracking Algorithm

The proposed algorithm is divided into two steps: (i) during initialization, the OD is localized and its radius is estimated; (ii) for tracking, the position of the OD is updated in every frame. In a preprocessing step for each frame, we apply a Gaussian filter for noise reduction and a dilation with subsequent erosion to suppress blood vessels that would influence the detection of circular objects.

Radius Estimation. The radial symmetry contribution map S_r(I) is calculated for a set of radii R = \{r_1, r_2, \ldots\} in image I. We search for the maximum over all maps S_r(I) and estimate the OD radius \hat{r} as

\hat{r}(I) = \arg\max_{r \in R} \left( \max(S_r(I)) \right),   (4)

where \max(S_r(I)) is the maximum of the radial symmetry contribution map S_r(I). As the radius estimation in a single image may be affected by noise, we estimate the radius in a number of successive frames during initialization and compute the median radius \bar{r} over all estimates \hat{r}. Since the OD size is fixed over time, this radius is used for FRST-based tracking in all subsequent frames, which also improves efficiency.

Position Update. The OD position is detected by computing the FRST map S_{\bar{r}}(I) for the current frame based on the radius estimate \bar{r}. Then, the center point is updated to the position of the maximum value \max(S_{\bar{r}}(I)). The efficiency of the position update is improved by computing S_{\bar{r}}(I) only in a square window centered on the OD position of the previous frame, with an edge length of l times the OD diameter. In case of large eye movements due to saccades, the OD leaves the window and its new position cannot be detected relative to the previous frame. Therefore, we compute the normalized cross-correlation \rho between the optic disk windows in successive frames. For \rho < \rho_{min}, we detect a saccade and update the OD position using the FRST on the whole image. Additionally, frames affected by blinking are excluded from tracking.
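The two tracking stages can be sketched compactly, with the FRST supplied as a callable. The window scale and correlation threshold below (l = 1.25, rho_min = 0.25) are illustrative defaults in the spirit of the parameters this section describes, not authoritative values, and the helper names are made up for this sketch.

```python
import numpy as np

def estimate_radius(frames, radii, frst):
    """Initialization: per-frame argmax over radii (Eq. 4),
    then the median radius over the initialization frames."""
    per_frame = [max(radii, key=lambda r: frst(f, r).max()) for f in frames]
    return int(np.median(per_frame))

def ncc(a, b):
    """Normalized cross-correlation between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def update_position(frame, prev_frame, center, radius, frst,
                    l=1.25, rho_min=0.25):
    """One position update: search the FRST maximum inside a square
    window of edge length l * OD diameter around the previous center;
    if the correlation between the current and previous window drops
    below rho_min, assume a saccade and search the whole frame."""
    h, w = frame.shape
    half = int(round(l * radius))  # half edge = (l * diameter) / 2
    y0, y1 = max(center[0] - half, 0), min(center[0] + half, h)
    x0, x1 = max(center[1] - half, 0), min(center[1] + half, w)
    win, prev_win = frame[y0:y1, x0:x1], prev_frame[y0:y1, x0:x1]
    if ncc(win, prev_win) < rho_min:
        S = frst(frame, radius)          # saccade: global search
        cy, cx = np.unravel_index(np.argmax(S), S.shape)
    else:
        S = frst(win, radius)            # normal case: windowed search
        cy, cx = np.unravel_index(np.argmax(S), S.shape)
        cy, cx = cy + y0, cx + x0
    return (cy, cx)
```

Passing the FRST in as a parameter keeps the saccade logic testable with any peak detector; in the real system the same clipped window also feeds the blink check described next.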
We compute the Bhattacharyya coefficient [6] b between the histogram of the first image and the histogram of the current frame, and exclude frames with b < b_{min}.

2.3 Experiments

We evaluated our algorithm on a set of fundus image sequences acquired with a low-cost video camera constructed by Dr. Ralf-Peter Tornow. The camera uses LED illumination and a CCD sensor (640 x 480 pixels) to acquire video sequences with up to 50 frames per second (fps) and a field of view of 5 without dilating the pupil. We captured image sequences from the left eye of six healthy subjects under two scenarios: (i) the subject fixated on a bright spot, so that the fixation position was stable; (ii) the subject fixated on different points in order to produce saccades. After three seconds of fixating on the starting point,

Table 1. Hit rate and accuracy, measured in OD diameters (ODD), of the proposed tracking-by-detection algorithm (TbD) and mean shift tracking (MST), evaluated against ground truth from expert A and expert B. The last two rows give the inter-observer variance between the experts.

                 Hit rate [%]    Mean error [ODD]         Median error [ODD]
                 A       B       A            B           A        B
 TbD (proposed)  98.0    98.9    0.3 ± 0.2    0.2 ± 0.8   0.03     0.04
 MST             86.9    88.9    0.35 ± 0.43  0.32 ± 0.4  0.2      0.9
 Expert A                        0.3 ± 0.2                0.03
 Expert B                        0.3 ± 0.2                0.03

the subject alternated between looking to the right, fixating on the starting point, and looking to the left. Each sequence has a duration of 5 seconds and was recorded at two different frame rates: 25 and 50 fps. Two ophthalmic imaging experts created ground truth data by labeling the OD center in every 5th frame and measuring the vertical OD diameter in the first frame. The data from one subject was used for parameter adjustment and was excluded from the results. The range of radii was set to R = {90, 95, ..., 120}. Six frames were used for initialization. The edge length of the OD window was chosen as l = 1.25 times the OD diameter. The saccade threshold was set to \rho_{min} = 0.25 and the blink threshold to b_{min} = 0.7. For the FRST, the radial strictness parameter was selected as \alpha = 2 and the gradient thresholds were set to \beta = and \gamma = 8. Our method was compared to intensity-based mean shift tracking (MST) [6]. All algorithms were implemented in C++ and runtimes were evaluated on an Intel i3 CPU with 2.4 GHz and 4 GB RAM. Robustness was assessed by computing the hit rate for each algorithm. A hit is detected if the estimated OD center is inside the labeled OD. The accuracy of tracking was evaluated by computing the Euclidean distance between the detected and the labeled center.

3 Results

The accuracy achieved by mean shift tracking and the proposed method, itemized for the examined subjects, is shown as boxplots in Fig. 1. The measuring unit is the OD diameter (ODD), i.e. the Euclidean error of the center point detection normalized by the corresponding ground truth OD diameter.
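The hit-rate and ODD-normalized error computation described above can be made concrete in a few lines. As a simplifying assumption for this sketch, the labeled OD is idealized as a circle of the labeled radius around the labeled center; the function name and signature are illustrative, not from the original implementation.

```python
import math

def evaluate(pred_centers, gt_centers, gt_radius, gt_diameter):
    """Hit rate [%] plus mean and median center error in ODD.
    A hit means the estimated center lies inside the labeled OD
    (idealized here as a circle of radius gt_radius); the error is
    the Euclidean distance normalized by the OD diameter."""
    hits, errors = 0, []
    for (py, px), (gy, gx) in zip(pred_centers, gt_centers):
        d = math.hypot(py - gy, px - gx)
        if d <= gt_radius:
            hits += 1
        errors.append(d / gt_diameter)  # normalize to ODD units
    errors.sort()
    n = len(errors)
    median = (errors[n // 2] if n % 2 else
              0.5 * (errors[n // 2 - 1] + errors[n // 2]))
    return 100.0 * hits / n, sum(errors) / n, median
```

Reporting the median alongside the mean matters here because a few saccade frames with gross failures inflate the mean while leaving the median, and hence the typical per-frame accuracy, nearly untouched.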
For both algorithms, tracking was more reliable for video data acquired at the lower frame rate (25 fps) and thus improved conditions in terms of illumination and contrast. Mean shift tracking showed higher errors for saccade sequences. Tab. 1 summarizes the hit rates as well as the mean, standard deviation, and median of the accuracy over all test sequences. Our algorithm achieved a mean hit rate of 98.0%, a mean accuracy of 0.3 ± 0.2 ODD, and a median accuracy of 0.04 ODD, which is in the range of the median inter-observer variance of 0.03 ODD. Mean shift tracking achieved a mean hit rate of 86.9% with a mean accuracy of 0.35 ± 0.43 ODD and a median accuracy of 0.2 ODD.

Fig. 1. Boxplots of the tracking accuracy for the proposed method (top row) and mean shift tracking (bottom row), normalized by the OD diameter (ODD), for 25 fps (a) and 50 fps (b). We evaluated the Euclidean distance between the estimated OD center point and the mean of the two expert labels used as ground truth.

A qualitative comparison between both methods for two different subjects is presented in Fig. 2. In particular, the proposed method was able to detect the OD even in case of saccades, whereas mean shift resulted in erroneous center point detections. In terms of run-time, we achieved a mean frame rate of 28 fps. The mean shift approach achieved a frame rate of 2 fps. We used the same initialization to estimate the OD radius for all methods, which had a mean runtime of 0.9 seconds.

4 Discussion

We proposed a tracking-by-detection framework to capture the position and the radius of the OD in successive frames of retinal fundus image sequences. The FRST is employed for tracking and exploits the circular shape of the OD. Compared to intensity-based mean shift tracking, the proposed method shows improved robustness with respect to large eye movements due to saccades. Our proof-of-concept implementation enables real-time application for 25 fps video data. A GPU implementation holds great potential to speed up our algorithm.

Fig. 2. Tracking results for different frames (a)-(h) from two subjects, obtained by mean shift tracking (dotted blue line) and the proposed method (magenta line).

In our approach, the OD detection and position update may be affected by blood vessels, which have a shape and gradient similar to the OD boundary, as shown in Fig. 2(g). In this case, the inclusion of vascular information as proposed in [3] could improve robustness. Future work will focus on a hybrid approach that integrates color and texture information into the proposed algorithm in order to improve tracking reliability.

Acknowledgments. The authors gratefully acknowledge funding of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the German Research Foundation (DFG) in the framework of the excellence initiative.

References

1. Tornow RP, Kopp O, Schultheiss B. Time course of fundus reflection changes according to the cardiac cycle. Invest Ophthalmol Vis Sci. 2003;44: Abstract 296.
2. Köhler T, Hornegger J, Mayer M, Michelson G. Quality-guided denoising for low-cost fundus imaging. In: Proc BVM 2012. Springer; 2012. p. 292-297.
3. Koozekanani D, Boyer KL, Roberts C. Tracking the optic nervehead in OCT video using dual eigenspaces and an adaptive vascular distribution model. IEEE Trans Med Imaging. 2003;22(12):1519-1536.
4. Loy G, Zelinsky A. Fast radial symmetry for detecting points of interest. IEEE Trans Pattern Anal Mach Intell. 2003;25(8):959-973.
5. Budai A, Aichert A, Vymazal B, Hornegger J, Michelson G. Optic disk localization using fast radial symmetry transform. In: Proc IEEE 26th International Symposium on Computer-Based Medical Systems (CBMS); 2013. p. 59-64.
6. Comaniciu D, Ramesh V, Meer P. Kernel-based object tracking. IEEE Trans Pattern Anal Mach Intell. 2003;25(5):564-577.