New Hough Transform-based Algorithm for Detecting L-shaped Linear Structures
Ronald Ngatuni 1, Jong Kwan Lee 1, Luke West 1, and Eric S. Mandell 2

1 Dept. of Computer Science, Bowling Green State Univ., Bowling Green, OH 43403, U.S.A.
2 Dept. of Physics & Astronomy, Bowling Green State Univ., Bowling Green, OH 43403, U.S.A.

Abstract

In this paper, we present an early attempt at the automatic detection of L-shaped linear structures. In particular, we describe a new Hough Transform (HT)-based algorithm that enables robust detection of L-shaped carbon nanocone structures in Transmission Electron Microscopy (TEM) images. The algorithm introduces a new parameter space into the Hough Transform processing for automatic detection of the L-shaped structures. The effectiveness of the algorithm is evaluated using various types of images.

Keywords: Hough Transform, Automated Segmentation, Image Processing

1. Introduction and Background

The Hough Transform (HT) [5] and its extensions (e.g., [3], [1], [7], [4]) have been used widely in applications that require automated structure segmentation. For example, the standard HT has been employed for line detection in sports video [16]. Automated face detection is another application area to which HT variants have often been applied [17], [12]. Many remote sensing applications have also used HT extensions (e.g., [2]) to detect scientifically interesting features.

In this paper, we introduce a new HT that enables detection of L-shaped structures in Transmission Electron Microscopy (TEM) imagery. Specifically, our target L-shaped structures are carbon nanocones, conical structures made predominantly of carbon. Carbon nanocones have a certain orientation under which their projected mass thickness allows for the level of contrast necessary to stand out against an essentially amorphous carbon background [11]. The carbon nanocones appear as two linear structures joined together in TEM images.
These structures are used in many fields, including nanocomputing and biosensors. In this paper, we focus on the detection of carbon nanocones whose linear structures form an angle of approximately 110, 140, 113, or 150 degrees. (These are the special types of nanocones studied by many physicists.) An example of a real TEM image with several carbon nanocones is shown in Figure 1. In the figure, sample carbon nanocones (i.e., dark L-shaped linear structures) are indicated by red arrows for the reader. (Corresponding author: leej@bgsu.edu.) As shown in the figure, the carbon nanocones have low contrast, and there are also other non-nanocone features with similar intensity characteristics within the images. In addition, the structures appear in different orientations; thus, it is very challenging to detect these structures automatically.

Current scientific studies of the carbon nanocones rely on manual feature extraction. Manual extraction can be very tedious and often does not produce a unique solution (e.g., different people produce different results). A robust automated carbon nanocone detection algorithm is therefore useful, giving scientists a way to search for these structures efficiently and consistently.

2. Related Work

In this section, the HT and its key variants are discussed. The standard HT enables detection of global patterns (those that can typically be expressed by an analytic equation) in an image space by examining local patterns in a transformed parameter space. In the HT, each edge point in the image space is mapped to multiple bins in the parameter space. The parameters of the pattern are then found by taking the parameters associated with the bin containing the highest bin count. An example of a standard HT for line detection (using the line equation y = mx + c) is shown in Figure 2. In the figure, five edge points on the same line and one noise point are mapped to the parameter space.
Fig. 1: Example of Carbon Nanocones in a Real TEM Image

Then, the parameters (i.e., m and c) are found where the lines intersect in the parameter space (i.e., the bin with the highest count). Here, we note that since y = mx + c cannot express a vertical line, the normal form of the line equation is often used in the HT for line detection. The standard HT, however, has high memory and computational requirements that depend on the input image size and the total number of edge points.

Some of the key HT variants include the Randomized Hough Transform (RHT) [15] and the Generalized Hough Transform (GHT) [1]. RHT-based approaches [15], [10], [2] alleviate the high memory and computational requirements of the standard HT. Instead of considering all edge points in the HT's binning process, the RHT randomly chooses n-tuples of edge pixels, where n is the minimum number of points needed to define the feature of interest analytically (e.g., n = 3 for a circle), and maps each tuple to one bin in the parameter space. This process is repeated until enough tuples have been mapped into the parameter space. Thus, the RHT is less dependent on the image size and the total number of edge points.

The GHT allows detection of both analytic and non-analytic (i.e., arbitrarily-shaped) features. In the GHT, a feature model based on a reference point (i.e., a key point on the feature) is used in the binning process. Specifically, for each edge point, the angle between the gradient direction at the edge point and the direction from the edge point to the reference point, together with the distance from the edge point to the reference point, is mapped into the parameter space using a lookup table. Then, the reference point is recovered from the bin with the highest vote count, and the feature can be recovered using the model.
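For concreteness, the standard HT binning described above can be sketched in Python. This is a generic illustration of the technique, not the authors' implementation; it uses the normal form of the line (which, as noted, also handles vertical lines), and all names and bin counts are our own choices.

```python
import numpy as np

def hough_lines(edge_points, img_size, n_theta=180, n_rho=200):
    """Standard Hough Transform for line detection using the normal form
    rho = x*cos(theta) + y*sin(theta): each edge point votes for every
    (theta, rho) bin consistent with a line passing through it."""
    diag = np.hypot(*img_size)                       # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for (x, y) in edge_points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        # map rho in [-diag, diag] onto a bin index in [0, n_rho - 1]
        bins = np.round((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    # parameters come from the bin with the highest count
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    theta = thetas[t]
    rho = r / (n_rho - 1) * 2 * diag - diag
    return theta, rho

# five collinear points on the line y = x plus one noise point, as in Figure 2;
# the recovered (theta, rho) is close to (3*pi/4, 0)
pts = [(i, i) for i in range(1, 6)] + [(9, 2)]
theta, rho = hough_lines(pts, (10, 10))
```

As in the text, the five collinear points vote into a common bin while the noise point's votes stay scattered, so the peak of the accumulator recovers the line.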
The standard HT and its variants (e.g., RHT and GHT) are not applicable (or are very challenging to apply) to carbon nanocone structure detection, since there is no analytic expression of the structures for the HT's processing, and L-shaped features whose linear structures form any angle within the specified set have to be considered at the same time. For example, the angle used in the GHT (as mentioned above) cannot be appropriately defined, since there are only two linear structures; the angle will be the same for all collinear edge points.

Fig. 2: Standard Hough Transform for Line Detection (borrowed from [8])

There have also been attempts at performing real-time HT using the highly parallel processing capability made possible by graphics processing units (GPUs) (e.g., [13], [14], [9]). However, these are not considered here, since we mainly focus on the development of a new algorithm that can accurately detect the L-shaped nanocone structures.

3. New Hough Transform for L-shaped Structures

Next, we describe our new Hough Transform method for detecting the L-shaped carbon nanocone structures. We call this new HT method the L-shaped Hough Transform (LHT) in this paper. The LHT proceeds in a similar way to the GHT; it performs its HT binning process based on a model. However, it introduces a new HT parameter space that exploits two key characteristics of the carbon nanocone structures. In particular, we utilize the angle formed by the joined linear structures and the distances between edge points that come from different linear structures. (Details are discussed later in this section.) Here, we note again that our LHT focuses on detection of carbon nanocone structures whose angle between their linear structures is approximately 110, 140, 113, or 150 degrees.

3.1 L-shaped Carbon Nanocone Model

As shown in Figure 1, the carbon nanocone structures are L-shaped. Thus, we build an L-shaped model and employ its characteristics in the LHT's binning process.
Figure 3 shows our model for the carbon nanocone structure. In Figure 3 (a), the joining edge point (we call this point the reference point), P_ref; edge points P_1 and P_2, one from each linear structure; and the nanocone's orientation, θ, are indicated on the L-shaped model. The reference point and the orientation are the parameters we use in the LHT's binning process, since they are the key characteristics of the model. In Figure 3 (b), edge points from each linear structure are shown. These edge points have the same distance from the reference point. The distance, d, from the reference point to the midpoint (P_m) of P_1 and P_2 is also indicated in the figure. This distance is used to find potential positions of the reference points in the LHT's binning process. We note that there are two potential reference points for a set of two edge points that have the same distance from the reference point; one potential reference point is the reflection of the other with respect to the line connecting P_1 and P_2. A set of two edge points (one from each linear structure) can have different distances from the reference point, as shown in Figure 3 (c). In this case, there are more than two potential reference points; they can be determined by finding a corresponding edge point (i.e., an edge point that has the same distance from the reference point, P_2 in Figure 3 (c)) and then using the midpoint, similar to Case 1. We consider all the potential reference points in the LHT's binning process.
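The Case 1 geometry above can be made concrete with a short sketch (our own, with assumed names; the paper gives no code). If P_1 and P_2 lie at equal distances from P_ref on arms separated by an opening angle α, then d = (|P_1 P_2| / 2) / tan(α/2), and the two candidate reference points are the midpoint P_m offset by d along either perpendicular of the segment P_1 P_2; the orientation then follows Equation 1.

```python
import math

def candidate_reference_points(p1, p2, angle_deg):
    """Case 1 (equal distances from P_ref): for an assumed opening angle,
    return the two candidate reference points, i.e. the midpoint of p1, p2
    offset by d = (|p1 p2| / 2) / tan(angle / 2) along either perpendicular
    of the segment p1-p2 (each candidate is the reflection of the other
    across the line through p1 and p2)."""
    (x1, y1), (x2, y2) = p1, p2
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # midpoint P_m
    dx, dy = x2 - x1, y2 - y1
    s = math.hypot(dx, dy)                          # |P_1 P_2|
    d = (s / 2.0) / math.tan(math.radians(angle_deg) / 2.0)
    ux, uy = -dy / s, dx / s                        # unit perpendicular
    return [(mx + d * ux, my + d * uy), (mx - d * ux, my - d * uy)]

def orientation(p_ref, p_m):
    """Orientation theta (Equation 1): direction from P_ref to P_m."""
    return math.atan2(p_m[1] - p_ref[1], p_m[0] - p_ref[0])

# Example: a 110-degree nanocone joined at the origin, arms at +/-55 degrees
# about the x-axis; one of the two candidates recovers the origin
# (up to floating-point error), the other is its reflection.
p1 = (10 * math.cos(math.radians(55)), 10 * math.sin(math.radians(55)))
p2 = (p1[0], -p1[1])
candidates = candidate_reference_points(p1, p2, 110)
```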
Fig. 3: L-shaped Carbon Nanocone Model: (a) L-shaped model (reference point, P_ref; two edge points, P_1 and P_2; and orientation, θ), (b) Case 1: P_1 and P_2 have the same distance from P_ref, and (c) Case 2: P_1 and P_2 have different distances from P_ref

3.2 Preprocessing

Our LHT-based carbon nanocone structure detection includes simple preprocessing steps to remove non-nanocone structures (e.g., background features) and to generate a binary image (i.e., an edge-point image). First, global thresholding is applied to remove most of the non-nanocone features. All pixels whose intensity is greater than a threshold value T are considered non-nanocone features (as the carbon nanocones have much lower intensity than other features in TEM images). We have found empirically that 0.28 is a reasonable threshold value for TEM images with intensities ranging from 0.0 to 1.0. Then, we apply a thinning algorithm to produce a more compact representation of the nanocone structures. We use the fast thinning algorithm by Zhang and Suen [18].

3.3 New Method: L-shaped Hough Transform (LHT)

The new LHT performs its HT binning process to recover the position of the reference point and the orientation. All possible combinations of edge-point pairs are considered to determine potential reference points and the structure orientation. These are then applied in the LHT's binning process: they are used as the indices to increment the bin count in a 3D accumulator array over the x- and y-coordinates of the potential reference points and the orientation.

The first step of the LHT is the recovery of potential reference points. Potential reference points can be recovered using the distance from the midpoint of two edge points (one from each linear structure) to the reference point. This step is made faster by using two pre-defined lookup tables.
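The paper does not show the lookup tables themselves; one plausible precomputation for the Case 1 table (names, sizes, and the distance range are our assumptions) maps each integer inter-point distance s = |P_1 P_2| to the midpoint-to-reference-point offset d for each of the four target opening angles, so the binning loop needs no per-pair trigonometry.

```python
import math

TARGET_ANGLES_DEG = (110, 113, 140, 150)  # nanocone opening angles from the paper
MAX_DIST = 400                            # assumed max edge-point separation (pixels)

# Case 1 lookup table (a sketch of the idea, not the paper's actual code):
# for each target opening angle and each integer distance s = |P_1 P_2|,
# precompute d = (s / 2) / tan(angle / 2), the offset from the midpoint of
# the pair to the candidate reference point along the perpendicular.
case1_table = {
    angle: [(s / 2.0) / math.tan(math.radians(angle) / 2.0)
            for s in range(MAX_DIST + 1)]
    for angle in TARGET_ANGLES_DEG
}
```

A wider opening angle yields a smaller offset d for the same pair distance, which is why each target angle needs its own row of the table.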
Fig. 4: Edge Point Pairs with the Same Distance

One lookup table is used for a pair of edge points that have the same distance from the reference point (i.e., Case 1, shown in Figure 3 (b)). Another table is used for a pair of edge points that have different distances from the reference point (i.e., Case 2, shown in Figure 3 (c)). These lookup tables are indexed by the distance between the edge points. As mentioned earlier, since the only carbon nanocones to be considered have their linear structures separated by approximately 110, 140, 113, or 150 degrees, we can pre-determine the positions of the potential reference points using the distance from the midpoint to the reference point and the direction from the midpoint that is perpendicular to the line connecting P_1 and P_2. For Case 1 of the edge-point pair, there are two potential reference points. For Case 2, there are more than two potential reference points, since there can be more than one set of edge points with the same distance (as shown in Figure 4).

Next, the orientation of the nanocone structure is recovered. For each potential reference point, we determine the orientation of the nanocone using the potential reference point coordinates, P_ref = (P_refx, P_refy), and the midpoint coordinates, P_m = (P_mx, P_my), in Equation 1:

θ = arctan( (P_my - P_refy) / (P_mx - P_refx) ).    (1)

Once the orientation is determined, the bin indexed by the reference point's coordinates and the orientation is incremented by one. This binning step is repeated for all pairs of edge points, and the parameters of the structures are then recovered by finding high bin counts in the 3D accumulator array. Here, we note that we merge bins with high bin counts when they are very close to each other.

4. Experimental Results

The LHT's effectiveness has been benchmarked using over 1,000 synthetic images and several simulated carbon nanocone images.
The synthetic image testing considered images containing one to four L-shaped linear structures with six different levels of random background noise: 0%, 1%, 2%, 3%, 4%, and 5%. A sample set of noisy images (with one L-shaped linear structure) is shown in Figure 5.
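The paper does not specify how the synthetic test images were generated; the following is only a plausible sketch of such a generator (all parameter names and values are our assumptions): draw two line segments joined at a reference point and flip a fraction of random background pixels on as noise.

```python
import numpy as np

def make_l_image(size=128, angle_deg=140, arm_len=40, noise_frac=0.02, seed=0):
    """Hypothetical synthetic test image: one L-shaped structure (two arms
    joined at a central reference point, separated by angle_deg) plus a
    given fraction of random background noise pixels."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size), dtype=np.uint8)
    ref = np.array([size // 2, size // 2], dtype=float)
    half = np.radians(angle_deg) / 2.0
    # two arms at +/- half the opening angle about a horizontal bisector
    for sign in (+1.0, -1.0):
        d = np.array([np.cos(sign * half), np.sin(sign * half)])
        for t in range(arm_len):
            x, y = (ref + t * d).astype(int)
            img[y, x] = 1
    # background noise: turn on noise_frac of all pixels at random positions
    n_noise = int(noise_frac * size * size)
    img[rng.integers(0, size, n_noise), rng.integers(0, size, n_noise)] = 1
    return img

img = make_l_image()  # one L-shaped structure with 2% background noise
```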
Fig. 5: Synthetic Images with Different Noise Levels: (a) No Noise, (b) 1% Noise, (c) 2% Noise, (d) 3% Noise, (e) 4% Noise, (f) 5% Noise

The benchmarking on the synthetic images considered the maximum, the minimum, the mean, and the standard deviation of the errors in the reference point position and the orientation. Tables 1 and 2 summarize the benchmarking results. As shown in the tables, the LHT recovered the L-shaped features very accurately; the averages of the reference point position errors and of the orientation errors were less than 0.5 for all noise levels. (Figure 6 shows two sample LHT results.) Over all L-shaped linear structures in the more than 1,000 synthetic images, 96% of the structures were recovered. However, the LHT produced some false positive errors (e.g., for the 5% noise images, the false positive rate was up to about 15%).

Table 1: Reference Point Position Error (in pixels), reported as Max., Min., Avg., and Std. Dev. for each noise level from 0% to 5%

Table 2: Orientation Error (in degrees), reported as Max., Min., Avg., and Std. Dev. for each noise level from 0% to 5%

The simulated carbon nanocone image testing applied all LHT steps, including the preprocessing steps. The simulated images were generated using the TEM image simulation presented in [6]. Figure 7 shows a sample result on a simulated image: Figure 7 (a) shows a simulated image with one nanocone structure, Figure 7 (b) shows the preprocessed version of the image in (a), and Figure 7 (c) shows the LHT detection result. As shown in the figure, the LHT recovered the structure reasonably well.

5. Conclusion and Discussion

We have presented a new HT-based method, still a work in progress, for detecting L-shaped carbon nanocone structures. The L-shaped Hough Transform (LHT) utilizes two key characteristics of the L-shaped model to define a new parameter space for the HT's binning process. Through evaluation of the method, we have shown that it can provide consistent and reasonable automated detection of L-shaped structures in synthetic and simulated carbon nanocone images. Here, we note that the LHT also produces promising detection results in our preliminary tests on real TEM images. However, our current version of the LHT has a few disadvantages. One is its very high computational requirement (i.e., it is time consuming), since it considers all possible combinations of edge-point pairs. An LHT using GPU processing may be able to alleviate this disadvantage, though. Another problem is that it produces some false positives. We are currently exploring different preprocessing and post-processing steps to reduce the false positives effectively. We also note that extending the LHT to other scientifically interesting structures (e.g., nanotubes, buckyballs, and other fullerenes) might be possible.

Fig. 6: LHT Detection Results on Synthetic Images with 2% Noise: (a) 1 structure and (b) 2 structures

Fig. 7: Result on a Sample Simulated Carbon Nanocone Image: (a) Simulated Image, (b) Preprocessed Image, and (c) LHT Detection Result

References

[1] D. Ballard, "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recognition, Vol. 13 (2), pp. 111-122, 1981.
[2] C. Cao, T.S. Newman, and G.A. Germany, "New Shape-based Auroral Oval Segmentation Driven by LLS-RHT," Pattern Recognition, Vol. 42 (5), 2009.
[3] R.O. Duda and P.E. Hart, "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, Vol. 15 (1), pp. 11-15, 1972.
[4] S. Hawley, "Application of Sparse Sampling to Accelerate the Hough Transform," Proc., 2008 Int'l Conf. on Image Processing, Computer Vision, & Pattern Recognition (IPCV'08), Las Vegas, July 2008.
[5] P.V.C. Hough, "Method and Means for Recognizing Complex Patterns," U.S. Patent 3,069,654, 1962.
[6] E.J. Kirkland, Advanced Computing in Electron Microscopy, Plenum Press, 1998.
[7] P. Kultanen, L. Xu, and E. Oja, "Randomized Hough Transform (RHT)," Proc., 10th Int'l Conf. on Pattern Recognition, Atlantic City, June 1990.
[8] J.K. Lee and M.L. Randles, "Efficient Ellipse Detection using GPU-based Linear Least Squares-based Randomized Hough Transform," Proc., 2010 Int'l Conf. on Image Processing, Computer Vision, and Pattern Recognition (IPCV'10), Las Vegas, July 2010.
[9] J.K. Lee, B.A. Wood, and T.S. Newman, "Very Fast Ellipse Detection using GPU-based RHT," Proc., 19th Int'l Conf. on Pattern Recognition, pp. 1-4, Tampa, Florida, December 2008.
[10] R.A. McLaughlin, "Randomized Hough Transform: Improved Ellipse Detection with Comparison," Pattern Recognition Letters, Vol. 19 (3-4), 1998.
[11] E.S. Mandell, Electron Beam Characterization of Carbon Nanostructures, Ph.D. Dissertation, University of Missouri-Rolla.
[12] A. Pietrowcew, "Face Detection in Colour Images using Fuzzy Hough Transform," Opto-Electronics Review, Vol. 11 (3).
[13] R. Strzodka, I. Ihrke, and M. Magnor, "A Graphics Hardware Implementation of the Generalized Hough Transform for Fast Object Recognition, Scale, and 3D Pose Detection," Proc., Int'l Conf. on Image Analysis and Processing (ICIAP'03), Barcelona, September 2003.
[14] M. Ujaldon, A. Ruiz, and N. Guil, "On the Computation of the Circle Hough Transform by a GPU Rasterizer," Pattern Recognition Letters, Vol. 29 (3), 2008.
[15] L. Xu, E. Oja, and P. Kultanen, "A New Curve Detection Method: Randomized Hough Transform (RHT)," Pattern Recognition Letters, Vol. 11 (5), pp. 331-338, 1990.
[16] X. Yu, H.C. Lai, S.X.F. Liu, and H.W. Leong, "A Gridding Hough Transform for Detecting the Straight Lines in Sports Video," Proc., IEEE Int'l Conf. on Multimedia and Expo, pp. 1-4, Amsterdam, July 2005.
[17] S. Zhang and Z. Liu, "A Robust, Real-Time Ellipse Detector," Pattern Recognition, Vol. 38 (2), 2005.
[18] T.Y. Zhang and C.Y. Suen, "A Fast Parallel Algorithm for Thinning Digital Patterns," Communications of the ACM, Vol. 27 (3), pp. 236-239, 1984.
More informationImproving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationFeature Matching and Robust Fitting
Feature Matching and Robust Fitting Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Project 2 questions? This
More informationChapter 11 Arc Extraction and Segmentation
Chapter 11 Arc Extraction and Segmentation 11.1 Introduction edge detection: labels each pixel as edge or no edge additional properties of edge: direction, gradient magnitude, contrast edge grouping: edge
More informationFinding 2D Shapes and the Hough Transform
CS 4495 Computer Vision Finding 2D Shapes and the Aaron Bobick School of Interactive Computing Administrivia Today: Modeling Lines and Finding them CS4495: Problem set 1 is still posted. Please read the
More informationPart-Based Skew Estimation for Mathematical Expressions
Soma Shiraishi, Yaokai Feng, and Seiichi Uchida shiraishi@human.ait.kyushu-u.ac.jp {fengyk,uchida}@ait.kyushu-u.ac.jp Abstract We propose a novel method for the skew estimation on text images containing
More informationLecture 8: Fitting. Tuesday, Sept 25
Lecture 8: Fitting Tuesday, Sept 25 Announcements, schedule Grad student extensions Due end of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review
More informationEdge Detection. CSE 576 Ali Farhadi. Many slides from Steve Seitz and Larry Zitnick
Edge Detection CSE 576 Ali Farhadi Many slides from Steve Seitz and Larry Zitnick Edge Attneave's Cat (1954) Origin of edges surface normal discontinuity depth discontinuity surface color discontinuity
More informationMORPHOLOGICAL EDGE DETECTION AND CORNER DETECTION ALGORITHM USING CHAIN-ENCODING
MORPHOLOGICAL EDGE DETECTION AND CORNER DETECTION ALGORITHM USING CHAIN-ENCODING Neeta Nain, Vijay Laxmi, Ankur Kumar Jain & Rakesh Agarwal Department of Computer Engineering Malaviya National Institute
More informationEdge Detection. Announcements. Edge detection. Origin of Edges. Mailing list: you should have received messages
Announcements Mailing list: csep576@cs.washington.edu you should have received messages Project 1 out today (due in two weeks) Carpools Edge Detection From Sandlot Science Today s reading Forsyth, chapters
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More informationA Robust Wipe Detection Algorithm
A Robust Wipe Detection Algorithm C. W. Ngo, T. C. Pong & R. T. Chin Department of Computer Science The Hong Kong University of Science & Technology Clear Water Bay, Kowloon, Hong Kong Email: fcwngo, tcpong,
More informationRESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE
RESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE K. Kaviya Selvi 1 and R. S. Sabeenian 2 1 Department of Electronics and Communication Engineering, Communication Systems, Sona College
More informationObject detection using non-redundant local Binary Patterns
University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh
More informationEdges and Lines Readings: Chapter 10: better edge detectors line finding circle finding
Edges and Lines Readings: Chapter 10: 10.2.3-10.3 better edge detectors line finding circle finding 1 Lines and Arcs Segmentation In some image sets, lines, curves, and circular arcs are more useful than
More informationSmall-scale objects extraction in digital images
102 Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 Small-scale objects extraction in digital images V. Volkov 1,2 S. Bobylev 1 1 Radioengineering Dept., The Bonch-Bruevich State Telecommunications
More informationA Miniature-Based Image Retrieval System
A Miniature-Based Image Retrieval System Md. Saiful Islam 1 and Md. Haider Ali 2 Institute of Information Technology 1, Dept. of Computer Science and Engineering 2, University of Dhaka 1, 2, Dhaka-1000,
More informationOn a fast discrete straight line segment detection
On a fast discrete straight line segment detection Ali Abdallah, Roberto Cardarelli, Giulio Aielli University of Rome Tor Vergata Abstract Detecting lines is one of the fundamental problems in image processing.
More informationLecture 9: Hough Transform and Thresholding base Segmentation
#1 Lecture 9: Hough Transform and Thresholding base Segmentation Saad Bedros sbedros@umn.edu Hough Transform Robust method to find a shape in an image Shape can be described in parametric form A voting
More informationDetecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds
9 1th International Conference on Document Analysis and Recognition Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds Weihan Sun, Koichi Kise Graduate School
More informationObject Classification Using Tripod Operators
Object Classification Using Tripod Operators David Bonanno, Frank Pipitone, G. Charmaine Gilbreath, Kristen Nock, Carlos A. Font, and Chadwick T. Hawley US Naval Research Laboratory, 4555 Overlook Ave.
More informationAN APPROACH FOR GENERIC DETECTION OF CONIC FORM
1 TH INTERNATIONAL CONFERENCE ON GEOMETRY AND GRAPHICS 6 ISGG 6-1 AUGUST, 6, SALVADOR, BRAZIL AN APPROACH FOR GENERIC DETECTION OF CONIC FORM Maysa G. MACEDO 1, and Aura CONCI 1 1 Fluminense Federal University,
More informationObject Shape Recognition in Image for Machine Vision Application
Object Shape Recognition in Image for Machine Vision Application Mohd Firdaus Zakaria, Hoo Seng Choon, and Shahrel Azmin Suandi Abstract Vision is the most advanced of our senses, so it is not surprising
More informationComputer Vision. Image Segmentation. 10. Segmentation. Computer Engineering, Sejong University. Dongil Han
Computer Vision 10. Segmentation Computer Engineering, Sejong University Dongil Han Image Segmentation Image segmentation Subdivides an image into its constituent regions or objects - After an image has
More informationGenetic Fourier Descriptor for the Detection of Rotational Symmetry
1 Genetic Fourier Descriptor for the Detection of Rotational Symmetry Raymond K. K. Yip Department of Information and Applied Technology, Hong Kong Institute of Education 10 Lo Ping Road, Tai Po, New Territories,
More informationRestoring Chinese Documents Images Based on Text Boundary Lines
Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Restoring Chinese Documents Images Based on Text Boundary Lines Hong Liu Key Laboratory
More informationInstance-level recognition
Instance-level recognition 1) Local invariant features 2) Matching and recognition with local features 3) Efficient visual search 4) Very large scale indexing Matching of descriptors Matching and 3D reconstruction
More informationPerceptual Quality Improvement of Stereoscopic Images
Perceptual Quality Improvement of Stereoscopic Images Jong In Gil and Manbae Kim Dept. of Computer and Communications Engineering Kangwon National University Chunchon, Republic of Korea, 200-701 E-mail:
More informationIOSR Journal of Electronics and communication Engineering (IOSR-JCE) ISSN: , ISBN: , PP:
IOSR Journal of Electronics and communication Engineering (IOSR-JCE) ISSN: 2278-2834, ISBN: 2278-8735, PP: 48-53 www.iosrjournals.org SEGMENTATION OF VERTEBRAE FROM DIGITIZED X-RAY IMAGES Ashwini Shivdas
More informationROTATION INVARIANT TRANSFORMS IN TEXTURE FEATURE EXTRACTION
ROTATION INVARIANT TRANSFORMS IN TEXTURE FEATURE EXTRACTION GAVLASOVÁ ANDREA, MUDROVÁ MARTINA, PROCHÁZKA ALEŠ Prague Institute of Chemical Technology Department of Computing and Control Engineering Technická
More informationEstimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry*
Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry* Yang Wang, Dimitris Samaras Computer Science Department, SUNY-Stony Stony Brook *Support for this research was provided
More informationMotion Detection. Final project by. Neta Sokolovsky
Motion Detection Final project by Neta Sokolovsky Introduction The goal of this project is to recognize a motion of objects found in the two given images. This functionality is useful in the video processing
More informationCS 231A Computer Vision (Winter 2014) Problem Set 3
CS 231A Computer Vision (Winter 2014) Problem Set 3 Due: Feb. 18 th, 2015 (11:59pm) 1 Single Object Recognition Via SIFT (45 points) In his 2004 SIFT paper, David Lowe demonstrates impressive object recognition
More informationSkeletonization Algorithm for Numeral Patterns
International Journal of Signal Processing, Image Processing and Pattern Recognition 63 Skeletonization Algorithm for Numeral Patterns Gupta Rakesh and Kaur Rajpreet Department. of CSE, SDDIET Barwala,
More informationA window-based inverse Hough transform
Pattern Recognition 33 (2000) 1105}1117 A window-based inverse Hough transform A.L. Kesidis, N. Papamarkos* Electric Circuits Analysis Laboratory, Department of Electrical Computer Engineering, Democritus
More informationEE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm
EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant
More informationEdge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels
Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface
More informationDetecting Multiple Symmetries with Extended SIFT
1 Detecting Multiple Symmetries with Extended SIFT 2 3 Anonymous ACCV submission Paper ID 388 4 5 6 7 8 9 10 11 12 13 14 15 16 Abstract. This paper describes an effective method for detecting multiple
More informationMULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION
MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of
More informationAn Edge-Based Approach to Motion Detection*
An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents
More informationCHAPTER 1 INTRODUCTION
CHAPTER 1 INTRODUCTION 1.1 Introduction Pattern recognition is a set of mathematical, statistical and heuristic techniques used in executing `man-like' tasks on computers. Pattern recognition plays an
More informationEdge linking. Two types of approaches. This process needs to be able to bridge gaps in detected edges due to the reason mentioned above
Edge linking Edge detection rarely finds the entire set of edges in an image. Normally there are breaks due to noise, non-uniform illumination, etc. If we want to obtain region boundaries (for segmentation)
More informationHidden Loop Recovery for Handwriting Recognition
Hidden Loop Recovery for Handwriting Recognition David Doermann Institute of Advanced Computer Studies, University of Maryland, College Park, USA E-mail: doermann@cfar.umd.edu Nathan Intrator School of
More informationA new gray level based Hough transform for region extraction: An application to IRS images 1
Ž. Pattern Recognition Letters 19 1998 197 204 A new gray level based Hough transform for region extraction: An application to IRS images 1 B. Uma Shankar, C.A. Murthy 2, S.K. Pal ) Machine Intelligence
More informationarxiv: v1 [cs.cv] 15 Nov 2018
SKETCH BASED REDUCED MEMORY HOUGH TRANSFORM Levi Offen & Michael Werman Computer Science The Hebrew University of Jerusalem Jerusalem, Israel arxiv:1811.06287v1 [cs.cv] 15 Nov 2018 ABSTRACT This paper
More informationPixels. Orientation π. θ π/2 φ. x (i) A (i, j) height. (x, y) y(j)
4th International Conf. on Document Analysis and Recognition, pp.142-146, Ulm, Germany, August 18-20, 1997 Skew and Slant Correction for Document Images Using Gradient Direction Changming Sun Λ CSIRO Math.
More informationVideo shot segmentation using late fusion technique
Video shot segmentation using late fusion technique by C. Krishna Mohan, N. Dhananjaya, B.Yegnanarayana in Proc. Seventh International Conference on Machine Learning and Applications, 2008, San Diego,
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based
More informationContent Based Image Retrieval: Survey and Comparison between RGB and HSV model
Content Based Image Retrieval: Survey and Comparison between RGB and HSV model Simardeep Kaur 1 and Dr. Vijay Kumar Banga 2 AMRITSAR COLLEGE OF ENGG & TECHNOLOGY, Amritsar, India Abstract Content based
More informationShort Survey on Static Hand Gesture Recognition
Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of
More information