New Hough Transform-based Algorithm for Detecting L-shaped Linear Structures

Ronald Ngatuni 1, Jong Kwan Lee 1, Luke West 1, and Eric S. Mandell 2
1 Dept. of Computer Science, Bowling Green State Univ., Bowling Green, OH 43403, U.S.A.
2 Dept. of Physics & Astronomy, Bowling Green State Univ., Bowling Green, OH 43403, U.S.A.

Abstract

In this paper, we present an early attempt at automatic detection of L-shaped linear structures. In particular, we describe a new Hough Transform (HT)-based algorithm that enables robust detection of L-shaped carbon nanocone structures in Transmission Electron Microscopy (TEM) images. The algorithm introduces a new parameter space into the Hough Transform processing for automatic detection of the L-shaped structures. The effectiveness of the algorithm is evaluated using various types of images.

Keywords: Hough Transform, Automated Segmentation, Image Processing

1. Introduction and Background

The Hough Transform (HT) [5] and its extensions (e.g., [3], [1], [7], [4]) have been used widely in many applications that require automated structure segmentation. For example, the standard HT has been employed for line detection in sports video [16]. Automated face detection is another application area to which HT variants have often been applied [17], [12]. Many remote sensing applications have also used HT extensions (e.g., [2]) to detect scientifically interesting features.

In this paper, we introduce a new HT that enables detection of L-shaped structures in Transmission Electron Microscopy (TEM) imagery. Specifically, our target L-shaped structures are carbon nanocones: conical structures made predominantly from carbon. Carbon nanocones have a certain orientation under which their projected mass thickness allows for the level of contrast necessary to stand out against an essentially amorphous carbon background [11]. In TEM images, the carbon nanocones appear as two linear structures joined together.
These structures are used in many fields, including nanocomputing and biosensors. In this paper, we focus on the detection of carbon nanocones whose linear structures form an angle of approximately 110°, 140°, 113°, or 150°. (These are the special types of nanocones studied by many physicists.) An example of a real TEM image with several carbon nanocones is shown in Figure 1. In the figure, sample carbon nanocones (i.e., dark L-shaped linear structures) are indicated by red arrows. As shown in the figure, the carbon nanocones have low contrast, and there are other, non-nanocone features with similar intensity characteristics within the images. In addition, the structures appear in different orientations; thus, it is very challenging to detect them automatically. Current scientific studies of carbon nanocones rely on manual feature extraction. Manual extraction can be very tedious and often does not provide a singular solution (e.g., different people produce different results). A robust automated carbon nanocone detection algorithm is useful because it gives scientists a method to search for these structures in an efficient and consistent manner.

(Corresponding author. Email: leej@bgsu.edu)

2. Related Work

In this section, the HT and its key variants are discussed. The standard HT method enables detection of global patterns (those that can typically be expressed using an analytic equation) in an image space by examining local patterns in a transformed parameter space. In the HT, each edge point in the image space is mapped to multiple bins in the parameter space. The parameters of the pattern are then found by taking the parameters associated with the bin containing the highest bin count. An example of the standard HT for line detection (using the line equation y = mx + c) is shown in Figure 2. In the figure, five edge points on the same line and one noise point are mapped to the parameter space.
The parameters (i.e., m and c) are then found where the lines intersect in the parameter space (i.e., at the bin with the highest count). Here, we note that since m cannot express a vertical line, the normal form of the line equation is often used in the HT for line detection.

Fig. 1: Example of Carbon Nanocones in a Real TEM Image

The standard HT, however, has high memory and computational requirements that depend on the input image size and the total number of edge points. Some of the key HT variants include the Randomized Hough Transform (RHT) [15] and the Generalized Hough Transform (GHT) [1]. RHT-based approaches [15], [10], [2] alleviate the high memory and computational requirements of the standard HT. Instead of considering all edge points in the HT's binning process, the RHT randomly chooses n-tuples of edge pixels, where n is the minimum number of points needed to define the feature of interest analytically (e.g., n = 3 for a circle), and maps each tuple to one bin in the parameter space. This process is repeated until enough tuples have been mapped into the parameter space. Thus, the RHT is less dependent on the image size and the total number of edge points.

The GHT allows detection of both analytic and non-analytic (i.e., arbitrarily-shaped) features. In the GHT, a feature model based on a reference point (i.e., a key point on the feature) is used in the binning process. Specifically, for each edge point, the angle between the gradient direction at the edge point and the direction from the edge point to the reference point, together with the distance from the edge point to the reference point, are mapped into the parameter space using a lookup table. The reference point is then recovered from the bin with the highest vote count, and the feature can be recovered using the model.
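As a concrete illustration of the standard HT voting just described, the following Python sketch accumulates votes in the normal form ρ = x cos θ + y sin θ and returns the peak bin. The function name, 1-pixel/1-degree bin sizes, and angular resolution are our own illustrative choices, not from the paper:

```python
import numpy as np

# Minimal sketch of standard HT line voting in the normal form
# rho = x*cos(theta) + y*sin(theta); bin sizes are illustrative.
def hough_lines(edge_points, img_size, n_theta=180):
    diag = int(np.ceil(np.hypot(img_size, img_size)))   # max |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for x, y in edge_points:
        # each edge point votes for every (rho, theta) bin it lies on
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)  # peak bin
    return r - diag, thetas[t]
```

For a horizontal line of edge points such as (x, 10), the peak lands at ρ = 10, θ = π/2, while isolated noise points spread their votes thinly across many bins.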
The standard HT and its variants (e.g., RHT and GHT) are not applicable (or are very challenging to apply) to carbon nanocone structure detection, since there is no analytic expression of the structures for the HT's processing, and L-shaped features whose linear structures form any angle within the specified set have to be considered at the same time. For example, the angle used in the GHT (as described above) cannot be appropriately defined since there are only two linear structures; the angles will be the same for all collinear edge points.

Fig. 2: Standard Hough Transform for Line Detection (borrowed from [8])

There have also been attempts at performing real-time HT using the highly parallel processing capability made possible by graphics processing units (GPUs) (e.g., [13], [14], [9]). However, these are not considered here since we mainly focus on the development of a new algorithm that can accurately detect the L-shaped nanocone structures.

3. New Hough Transform for L-shaped Structures

Next, we describe our new Hough Transform method for detecting the L-shaped carbon nanocone structures. We call this new HT method the L-shaped Hough Transform (LHT). The LHT proceeds in a similar way to the GHT; it performs its HT binning process based on a model. However, it introduces a new HT parameter space that exploits two key characteristics of the carbon nanocone structures. In particular, we utilize the angle formed by the joined linear structures and the distances between edge points that come from different linear structures. (Details are discussed later in this section.) We note again that our LHT focuses on detecting carbon nanocone structures whose angle between their linear structures is approximately 110°, 140°, 113°, or 150°.

3.1 L-shaped Carbon Nanocone Model

As shown in Figure 1, the carbon nanocone structures are L-shaped.
Thus, we build an L-shaped model and employ its characteristics in the LHT's binning process. Figure 3 shows our model for the carbon nanocone structure. In Figure 3 (a), the joining edge point (we call this point the reference point), P_ref; the edge points P_1 and P_2, one from each linear structure; and the nanocone's orientation, θ, are indicated on the L-shaped model. The reference point and the orientation are the parameters we use in the LHT's binning process, since they are the key characteristics of the model. In Figure 3 (b), edge points from each linear structure are shown. These edge points have the same distance from the reference point. The distance, d, from the reference point to the midpoint (P_m) of P_1 and P_2 is also indicated in the figure. This distance is used to find potential positions of the reference point in the LHT's binning process. We note that there are two potential reference points for a pair of edge points that have the same distance from the reference point; one potential reference point is the reflection of the other with respect to the line connecting P_1 and P_2. A pair of edge points (one from each linear structure) can also have different distances from the reference point, as shown in Figure 3 (c). In this case, there are more than two potential reference points, and they can be determined by finding a corresponding edge point (i.e., an edge point that has the same distance from the reference point; P_2 in Figure 3 (c)) and using the midpoint, similar to Case 1. We consider all the potential reference points in the LHT's binning process.
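The Case 1 geometry can be sketched as follows. Here `candidate_refs` is a hypothetical helper (not from the paper): given two equidistant edge points and an assumed interior angle φ, the isoceles triangle P_1–P_ref–P_2 gives d = |P_1P_2| / (2 tan(φ/2)), and both reflections across the chord are candidate reference points:

```python
import math

# Hypothetical helper: candidate reference points for Case 1 of the model
# (P1 and P2 equidistant from P_ref, arms meeting at interior angle phi).
# The relation d = |P1P2| / (2 * tan(phi/2)) follows from the isoceles
# triangle P1-P_ref-P2; both reflections across the chord are returned.
def candidate_refs(p1, p2, phi_deg):
    mx, my = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    chord = math.hypot(dx, dy)
    d = (chord / 2.0) / math.tan(math.radians(phi_deg) / 2.0)
    ux, uy = -dy / chord, dx / chord          # unit normal to the chord
    return [(mx + d * ux, my + d * uy), (mx - d * ux, my - d * uy)]
```

For example, with P_1 = (-1, 0), P_2 = (1, 0), and φ = 90°, the two candidates are (0, 1) and (0, -1), mirror images across the chord as described above.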

Fig. 3: L-shaped Carbon Nanocone Model: (a) L-shaped model (reference point, P_ref, two edge points, P_1 and P_2, and orientation, θ), (b) Case 1: P_1 and P_2 have the same distance from P_ref, and (c) Case 2: P_1 and P_2 have different distances from P_ref

3.2 Preprocessing

Our LHT-based carbon nanocone structure detection includes simple preprocessing steps to remove non-nanocone structures (e.g., background features) and to generate the binary image (i.e., the edge point image). First, global thresholding is applied to remove most of the non-nanocone features. All pixels whose intensity is greater than a threshold value T are considered to be non-nanocone features (since the carbon nanocones have much lower intensity than other features in TEM images). We have empirically found that 0.28 is a reasonable threshold value for TEM images whose intensity ranges from 0.0 to 1.0. Then, we apply a thinning algorithm to produce a more compact representation of the nanocone structures; we use the fast thinning algorithm of Zhang and Suen [18].

3.3 New Method: L-shaped Hough Transform (LHT)

The LHT performs its HT binning process to recover the position of the reference point and the orientation. All possible combinations of edge point pairs are considered to determine potential reference points and the structure orientation. These are then applied in the LHT's binning process; they are used as the indices to increment the bin count in a 3D accumulator array over the x- and y-coordinates of the potential reference points and the orientation.

The first step of the LHT is the recovery of potential reference points. Potential reference points can be recovered using the distance from the midpoint of two edge points (one from each linear structure) to the reference point. This step is sped up using two pre-defined lookup tables. In particular, one lookup table is used for a pair of edge points that have the same distance from the reference point (i.e., Case 1, shown in Figure 3 (b)). The other table is used for a pair of edge points that have different distances from the reference point (i.e., Case 2, shown in Figure 3 (c)). These lookup tables are indexed by the distance between the edge points. As mentioned earlier, since the only carbon nanocones to be considered have their linear structures separated by approximately 110°, 140°, 113°, or 150°, we can pre-determine the positions of the potential reference points using the distance from the midpoint to the reference point and the direction from the midpoint perpendicular to the line connecting P_1 and P_2. For Case 1 of the edge point pair, there are two potential reference points. For Case 2, there are more than two potential reference points, since there can be more than one pair of edge points with the same distance (as shown in Figure 4).

Fig. 4: Edge Point Pairs with the Same Distance

Next, the orientation of the nanocone structure is recovered. For each potential reference point, we determine the orientation of the nanocone using the potential reference point coordinates, P_ref = (P_refx, P_refy), and the midpoint coordinates, P_m = (P_mx, P_my), in Equation 1:

θ = arctan( (P_my − P_refy) / (P_mx − P_refx) ).   (1)

Once the orientation is determined, the bin indexed by the reference point's coordinates and the orientation is incremented by one. This binning step is repeated for all pairs of edge points, and the parameters of the structures are then recovered by finding high bin counts in the 3D accumulator array. We note that we merge bins with high counts when they are very close to each other.

4. Experimental Results

The LHT's effectiveness has been benchmarked using over 1,000 synthetic images and several simulated carbon nanocone images. The tested images were of size 256 × 256.
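The orientation recovery of Equation 1 and the accumulator update described in Section 3.3 can be sketched as follows. The sparse dictionary-based accumulator and the 1-pixel/1-degree bin widths are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import defaultdict

# Sketch of the LHT voting step: Equation 1 for the orientation, then an
# increment of the 3D accumulator. The sparse dictionary accumulator and
# the bin widths are illustrative assumptions, not the paper's design.
def vote(acc, p_ref, p_m, xy_bin=1.0, theta_bin_deg=1.0):
    # Equation 1 (atan2 keeps the correct quadrant)
    theta = math.degrees(math.atan2(p_m[1] - p_ref[1], p_m[0] - p_ref[0]))
    key = (round(p_ref[0] / xy_bin),        # x-bin of potential reference point
           round(p_ref[1] / xy_bin),        # y-bin of potential reference point
           round(theta / theta_bin_deg))    # orientation bin
    acc[key] += 1
    return key

acc = defaultdict(int)
vote(acc, (10.0, 10.0), (20.0, 20.0))       # midpoint at 45 degrees from P_ref
```

Repeating `vote` over all edge point pairs and then scanning `acc` for high counts mirrors the peak-finding and bin-merging step described above.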
The synthetic image testing considered images of one to four L-shaped linear structures with six different levels of random background noise. We considered 0%, 1%, 2%, 3%, 4%, and 5% background noise. A sample set of noise images (with one L-shaped linear structure) is shown in Figure 5.
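Synthetic test images of this kind can be approximated as below. This is a minimal sketch under two assumptions of ours: that "n% noise" means setting that fraction of pixels to foreground at random, and a right-angle L for simplicity (the paper's structures meet at roughly 110° to 150°):

```python
import random

# Sketch of a synthetic test image: an L-shaped structure plus n% random
# background noise. Treating "n% noise" as flipping that fraction of
# pixels to foreground is our assumption, not the paper's definition.
def make_l_image(size=256, noise_pct=2.0, seed=0):
    rng = random.Random(seed)
    img = [[0] * size for _ in range(size)]
    for i in range(60):                     # vertical arm of the "L"
        img[100 + i][100] = 1
    for j in range(60):                     # horizontal arm
        img[160][100 + j] = 1
    n_noise = int(size * size * noise_pct / 100.0)
    for _ in range(n_noise):                # salt noise
        img[rng.randrange(size)][rng.randrange(size)] = 1
    return img
```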

Fig. 5: Synthetic Images with Different Noise Levels: (a) No Noise, (b) 1% Noise, (c) 2% Noise, (d) 3% Noise, (e) 4% Noise, (f) 5% Noise

The benchmarking on the synthetic images considered the maximum, the minimum, the mean, and the standard deviation of the errors in the reference point position and the orientation. Tables 1 and 2 summarize the benchmarking results. As shown in the tables, the LHT recovered the L-shaped features very accurately; the average reference point position errors and orientation errors were all less than 0.5 (pixels and degrees, respectively) for all noise levels. (Figure 6 shows two sample LHT results.) Across all L-shaped linear structures in the more than 1,000 synthetic images, 96% of the structures were recovered. However, the LHT produced some false positives (e.g., for the 5% noise images, the false positive rate was up to about 15%).

Table 1: Reference Point Position Error (in pixels)

Noise   Max.   Min.   Avg.   Std. Dev.
0 %     1.31   0.00   0.44   0.15
1 %     1.35   0.00   0.45   0.15
2 %     1.42   0.00   0.45   0.15
3 %     1.43   0.00   0.43   0.15
4 %     1.91   0.00   0.42   0.16
5 %     1.42   0.01   0.45   0.16

Table 2: Orientation Error (in degrees)

Noise   Max.   Min.   Avg.   Std. Dev.
0 %     1.66   0.00   0.41   0.15
1 %     1.38   0.00   0.37   0.14
2 %     1.52   0.00   0.38   0.16
3 %     1.54   0.00   0.36   0.16
4 %     1.67   0.00   0.37   0.15
5 %     1.64   0.01   0.40   0.17

Fig. 6: LHT Detection Results on Synthetic Images with 2% Noise: (a) 1 structure and (b) 2 structures

The simulated carbon nanocone image testing applied all LHT steps, including the preprocessing steps. The simulated images were generated using the TEM image simulation presented in [6]. Figure 7 shows a sample result: Figure 7 (a) shows a simulated image with one nanocone structure, Figure 7 (b) shows the preprocessed version of the image in (a), and Figure 7 (c) shows the LHT detection result. As shown in the figure, the LHT recovered the structure reasonably well.

Fig. 7: Result on a Sample Simulated Carbon Nanocone Image: (a) Simulated Image, (b) Preprocessed Image, and (c) LHT Detection Result

5. Conclusion and Discussion

We have presented a new HT-based method, still a work in progress, for detecting L-shaped carbon nanocone structures. The L-shaped Hough Transform (LHT) utilizes two key characteristics of the L-shaped model in defining a new parameter space for the HT's binning process. Through evaluation of the method, we have shown that it can provide consistent and reasonable automated detection of L-shaped structures in synthetic and simulated carbon nanocone images. We note that the LHT also produces promising detection results in our preliminary tests on real TEM images. However, the current version of the LHT has a few disadvantages. One is its very high computational requirement (i.e., it is time consuming), since it considers all possible combinations of edge point pairs; an LHT using GPU processing may be able to alleviate this. Another is that it produces some false positives. We are currently exploring different preprocessing and post-processing steps to reduce the false positives effectively. We also note that extending the LHT to other scientifically interesting structures (e.g., nanotubes, buckyballs, and other fullerenes) might be possible.

References

[1] D. Ballard, "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recognition, Vol. 13 (2), pp. 111-122, 1981.
[2] C. Cao, T.S. Newman, and G.A. Germany, "New Shape-based Auroral Oval Segmentation Driven by LLS-RHT," Pattern Recognition, Vol. 42 (5), pp. 607-618, 2009.
[3] R.O. Duda and P.E. Hart, "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, Vol. 15 (1), pp. 11-15, 1972.
[4] S. Hawley, "Application of Sparse Sampling to Accelerate the Hough Transform," Proc., 2008 Int'l Conf. on Image Processing, Computer Vision, & Pattern Recognition (IPCV'08), pp. 643-647, Las Vegas, July 2008.
[5] P.V.C. Hough, "Method and Means for Recognizing Complex Patterns," U.S. Patent 3,069,654, 1962.
[6] E.J. Kirkland, Advanced Computing in Electron Microscopy, Plenum Press, 1988.
[7] P. Kultanen, L. Xu, and E. Oja, "Randomized Hough Transform (RHT)," Proc., 10th Int'l Conf. on Pattern Recognition, pp. 631-635, Atlantic City, June 1990.
[8] J.K. Lee and M.L. Randles, "Efficient Ellipse Detection using GPU-based Linear Least Squares-based Randomized Hough Transform," Proc., 2010 Int'l Conf. on Image Processing, Computer Vision, and Pattern Recognition (IPCV'10), pp. 714-719, Las Vegas, July 2010.
[9] J.K. Lee, B.A. Wood, and T.S. Newman, "Very Fast Ellipse Detection using GPU-based RHT," Proc., 19th Int'l Conf. on Pattern Recognition, pp. 1-4, Tampa, Florida, December 2008.
[10] R.A. McLaughlin, "Randomized Hough Transform: Improved Ellipse Detection with Comparison," Pattern Recognition Letters, Vol. 19 (3-4), pp. 299-305, 1998.
[11] E.S. Mandell, Electron Beam Characterization of Carbon Nanostructures, Ph.D. Dissertation, University of Missouri-Rolla, 2008.
[12] A. Pietrowcew, "Face Detection in Colour Images using Fuzzy Hough Transform," Opto-Electronics Review, Vol. 11 (3), pp. 247-251, 2003.
[13] R. Strzodka, I. Ihrke, and M. Magnor, "A Graphics Hardware Implementation of the Generalized Hough Transform for Fast Object Recognition, Scale, and 3D Pose Detection," Proc., Int'l Conf. on Image Analysis and Processing, pp. 188-193, Barcelona, September 2003.
[14] M. Ujaldon, A. Ruiz, and N. Guil, "On the Computation of the Circle Hough Transform by a GPU Rasterizer," Pattern Recognition Letters, Vol. 29 (3), pp. 309-318, 2007.
[15] L. Xu, E. Oja, and P. Kultanen, "A New Curve Detection Method: Randomized Hough Transform (RHT)," Pattern Recognition Letters, Vol. 11 (5), pp. 331-338, 1990.
[16] X. Yu, H.C. Lai, S.X.F. Liu, and H.W. Leong, "A Gridding Hough Transform for Detecting the Straight Lines in Sports Video," Proc., IEEE Int'l Conf. on Multimedia and Expo, pp. 1-4, Amsterdam, July 2005.
[17] S. Zhang and Z. Liu, "A Robust, Real-Time Ellipse Detector," Pattern Recognition, Vol. 38 (2), pp. 273-287, 2005.
[18] T.Y. Zhang and C.Y. Suen, "A Fast Parallel Algorithm for Thinning Digital Patterns," Communications of the ACM, Vol. 27 (3), pp. 236-239, 1984.