Transactions on Information and Communications Technologies, vol. 16, 1996, WIT Press, ISSN


Shape-invariant object detection with large scale changes

John DeCatrel
Department of Computer Science, Florida State University, Tallahassee, FL

Abstract

This paper reports extensions to an innovative object detection and pose estimation method for non-analytic shapes, potentially useful to machine or robotic vision systems. Recently we demonstrated a shape-invariant recognition technique based upon the generalized Hough transform that is invariant to large planar changes in object position and rotation, as well as small changes in scale. The method works even for cases where objects are moderately occluded. It is of comparatively low complexity, O(n²), where n is the edge length in pixels. The original technique has now been extended to detect and report large scale changes and multiple instances of the transformed prototype. Preliminary results have been obtained and are demonstrated. These improvements do not increase the original time complexity. Hallmarks of the original technique include the following. A two-stage process first hypothesizes target locations. The second stage (visual confirmation) reports estimated pose (position and orientation) with respect to a prototype model. The calculation of rotationally invariant R-table indices uses polar edge-pixel pairs: pixels paired along a line normal to an object's edge. To incorporate the detection of moderate scale changes, distances between all valid edgel polar pairs in the image are acquired. Each distance is compared to a corresponding distance in the prototype and stored as evidence of a scale change. For each target instance, a modal average is the reported scale. Multiple targets are found by adaptively setting a threshold in Hough parameter space. This threshold responds to an inference about object perimeter size, i.e., an edge pixel count.

1 Introduction

One useful machine vision objective is the automatic detection of objects that have been transformed by changes in perspectivity. Suppose that an object model is stored by the vision system, perhaps in a model library. One or more instances of the object may appear in a digitized image, acquired by a video camera. With increasing distance, the two-dimensional image is scaled down, due to perspective projection. For sufficiently long camera-to-object distances, this transformation can be accurately approximated by isotropic scaling. The technique and implementation described herein extend our recently reported method to identify objects that are moved and rotated in the plane, with respect to a canonically posed prototype [1]. Such apparent changes may also be effected by a change in viewpoint, rather than object movement. The extension allows much greater scale changes than previously reported. Furthermore, multiple instances of target objects are now allowed, and these can be individually transformed. Tolerance to moderate amounts of occlusion is preserved. Note that this technique adds scale and rotation invariance to the generalized Hough transform's (GHT) position invariance. This is performed at a lower time complexity than the few previously reported methods of comparable capability. Performance degrades as occlusion, scale change, and scene complexity increase, but it is sufficiently good for many object detection applications found in vision research and engineering environments. A two-stage process first hypothesizes target locations. The second stage (visual confirmation) reports estimated pose (position and orientation) and scale with respect to a prototype model. The second stage efficiently reuses software code from the first stage. Unlike some other GHT methods, no specialized searching of the parameter (voting) space is needed for many applications.
Adaptive thresholding provides for the location of multiple, individually transformed object instances automatically. The method has been implemented with close attention to computational efficiency.

1.1 Assumptions

Target detection assumes pre-processing to obtain decent binary images [2]. We proceeded with this limitation to make the problem more tractable, and various binarization techniques are readily available. One tradeoff in using binary edges is that quantization of the edge gradient direction can be much poorer. In most gradient edge detectors, gray-level changes along vertical and (separately) horizontal directions provide for many possible gradient directions. This problem was resolved to a sufficient degree by computing the tangent of non-analytic contours over a small neighborhood of pixels for

each edgel. (In this paper, edgel is restrictively defined to mean edge pixel.) One complication is that internal object edges are undifferentiated from hull edges. Some internal edges and edgels can be negotiated with little difficulty. However, extensive internal edges, especially as would arise from internal reflection interactions or sudden surface variations, should be eliminated by pre-processing. A moderate amount of random noise is thwarting, and should also be pre-filtered. We presume that these limitations are implicit in the comparative methods as well, which are mentioned later. To increase the utility of our software, it is advisable to obtain or design good pre-processing methods. The objective is to better obtain useful, low-noise silhouettes automatically. Confirmation of highly cluttered or occluded objects might proceed by employing reasoning or knowledge about object prototypes and domain scenes.

2 Methodology

2.1 Overview

This GHT begins as usual by constructing an R-table from a prototype object. Very similar descriptions apply to building the table and to using the table during a detection trial. Essentially, the R-table is a lookup table wherein each edge point can find where a fixed reference point (L) is located with respect to that edge point. To index into the table, an edgel must first find an opposite (polar) edgel, in the pixel's normal direction. The difference of the corresponding edgel-pair arctangents provides a rotationally invariant index. The table stores a vector to L. Often, several polar-pair edgels have the same arctangent difference value. Then, several row entries must be stored, one entry for each edgel pair. To decrease sensitivity to finding the exact same edgel pair at construction time and at trial time, we index into several neighboring rows of the R-table, to examine a small range of tangent differences. In a standard GHT scheme, each edgel would be required to vote for every entry per row indexed.
Hence, to suppress excessive voting, each entry in each indexed row is examined by secondary indices: local edge curvatures at polar points. Only entries that match these additional features within a reasonable range are used as votes. To make the method scale invariant, we make opportunistic use of data that has previously been determined. For example, upon determining a polar pair, the distance between its members is then known; the distance is known for the image instance(s) as well as for the model prototype. Hence, for each edgel, distance ratios between model and image instance can be used as evidence for scale change. Specifically, each edgel votes for one or more L locations that have been scaled by the intra-pair distance ratio. Note that curvature values in the R-table used for secondary indexing are scaled as well. This tacitly assumes that curves are approximately circular arcs.
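The rotation-invariant indexing and scaled voting just described can be sketched as follows. This is only an illustrative Python fragment (the paper's implementation is integer-only C); the function names and the simplified three-field R-table record are assumptions, not the authors' exact data structures:

```python
import math

def polar_pair_index(theta_a_deg, theta_b_deg):
    """Rotation-invariant R-table index: the difference of the polar
    pair's two tangent angles, quantized to one-degree bins (0..179).
    Rotating the object shifts both tangents equally, so the
    difference survives rotation."""
    return int(round(theta_a_deg - theta_b_deg)) % 180

def scaled_vote(x, y, theta_img_deg, rho_img, entry):
    """One vote for the reference point L, cast by the edgel at (x, y).

    `entry` is a simplified R-table record (alpha, lam, rho_model):
    alpha = angle from the model tangent to the vector toward L,
    lam = model distance to L, rho_model = model intra-pair distance.
    The vote is displaced by lam scaled with rho_img / rho_model,
    the local evidence of scale change."""
    alpha_deg, lam, rho_model = entry
    s = rho_img / rho_model                  # local scale estimate
    phi = math.radians(theta_img_deg + alpha_deg)
    return (x + s * lam * math.cos(phi),     # predicted L position
            y + s * lam * math.sin(phi))
```

Note that the index is unchanged when the same constant is added to both tangent angles, which is the invariance property the method relies on.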

To detect multiple target instances, a threshold is set in the Hough parameter/voting space. This threshold is set automatically by comparing the vote count at a candidate L point to the expected number of voters: the instance edgel count, which is the same as the perimeter size for unoccluded objects. For each target, the expected vote count varies linearly with scale change, the latter having been computed during the pose estimation stage. Local maxima above the threshold are reported as transformed targets.

2.2 Building the R-table

Figure 1: (a) Geometry and angle definitions for R-table construction. An edgel polar pair at points A and B. (b) Vote cast by one edgel in a scaled, rotated object. Although the arctangents of polar pairs change, their angular difference is constant.

Many real shapes will provide a sufficient number of polar pairs for instance detection, even when moderately occluded. A binary edge image is input into the tangent-finding routine. For each edgel, a tangent value is calculated and stored as an image, registered with the input. Incident and exit positions of edges are obtained over a small local neighborhood with sub-pixel accuracy [1]. A second pass smooths neighbor tangent values, while obtaining arctangents from a lookup table. In our current implementation, the angle value can range from 0 to 179 degrees. At the same time, the degree of curvature (κ) in a tangent

neighborhood is determined. This routine is similar to tangent-finding, with the tangent image used as an input. A location or reference point (L) is chosen for the prototype shape. In our trials, the location point was simply the midpoint of the maximum and minimum vertical and horizontal (x, y) extents of the object. During the target detection phase, votes by edgels should accumulate at a point corresponding to L (in registered parameter space), even when the test image contains a scaled, rotated, and partially occluded prototype. For each edgel's corresponding tangent, the parameters for a line normal to the tangent are found. Using the midpoint version of the Bresenham line drawing technique [3], pixels are visited along the line's locus until another edgel is encountered (forming a polar pair), or the visit goes beyond either the x or y limits of the image (figure 1, a). If a pair is formed, the intra-pair distance (ρ) is recorded. The difference between the two arctangents of a polar pair (Δθ) is a rotation-invariant index into the R-table. In the referenced figure, denote by θA the arctangent at point A in degrees. φ is the angle between the x-axis and line AL. The angle between the edgel's tangent and its vector to L, α, is then easily calculated. We have quantized tangent resolutions to one degree; therefore the R-table has 180 rows and a variable number of columns. R-table record entries, then, are 6-tuples (θA, α, λ, κA, κB, ρ). Recall that θA is the tangent at point A; α and λ are the direction and distance to L, respectively; κA and κB are local curvatures at edge point A and corresponding polar point B; and ρ is the intra-pair distance. The maximum number of entries in the table is n, the number of edgels in the edge image. The actual number is typically smaller than n, which reflects the redundancy of edge attributes found in typical imaged shapes.
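A minimal sketch of R-table construction from a list of prototype edgels follows. The edgel record layout (a dict per edgel with a precomputed polar-pair index) is an assumption made for illustration; the stored 6-tuples mirror the (θA, α, λ, κA, κB, ρ) entries described above:

```python
import math
from collections import defaultdict

def build_r_table(edgels, L):
    """Build an R-table from prototype edgels.

    `edgels` is a list of dicts with keys: pos (x, y), theta (tangent
    angle in degrees), kappa (local curvature), pair (index of the
    polar-pair edgel found along the normal, or None), rho (intra-pair
    distance). Each table row is keyed by the quantized tangent
    difference (0..179) and holds tuples
    (theta_a, alpha, lam, kappa_a, kappa_b, rho)."""
    table = defaultdict(list)
    lx, ly = L
    for e in edgels:
        if e["pair"] is None:              # normal ran off the image
            continue
        b = edgels[e["pair"]]
        x, y = e["pos"]
        lam = math.hypot(lx - x, ly - y)   # distance to reference point
        # alpha: angle from the edgel's tangent to its vector toward L
        alpha = (math.degrees(math.atan2(ly - y, lx - x)) - e["theta"]) % 360
        idx = int(round(e["theta"] - b["theta"])) % 180
        table[idx].append(
            (e["theta"], alpha, lam, e["kappa"], b["kappa"], e["rho"]))
    return table
```

Because several pairs can share one tangent difference, each row is a list, matching the paper's variable number of columns per row.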
2.3 Processing the target image

The target image is first processed with the edge-producing and tangent routines. For each non-zero pixel, the tangent and normal are found. A line of pixels is again visited, in the two normal directions from the edgel. Should a non-zero pixel be encountered, its tangent is found. The difference angle (Δθ) is calculated, the intra-pair distance (ρ′) is noted, and the curvature measures are recorded. Δθ selects a row of entries from the R-table. For each entry that is found, the two curvature measures are compared for reasonability, i.e., whether the values fall within a pre-specified range. For scale invariance, curvatures in each entry are scaled by ρ′/ρ. This operation tacitly assumes that curves are approximated by circular arcs. If the scaled curvature values are reasonable, the location point angle (α) is read from the table. The distance to L′, the image of L, is calculated: λ′ = (ρ′/ρ)λ. α and λ′ are used to vote for the position of L′. Some edgels in the target image will produce several votes, due to false polar-pair matching. With infrequent exceptions, only one vote per edgel will register at or near the location point (L′). Hence, L′ should receive nearly as many votes as there are object edge pixels in the target image, provided that there is indeed a match between the prototype shape and the target shape. This is employed to automatically set a threshold in the voting

space. If edges are significantly occluded, then this program feature fails, unless additional information is allowed. For example, the user might specify that expected target perimeters may be P% occluded. In that case, threshold setting would be considered less than fully automatic. Tangent calculations necessarily incorporate some imprecision; this is largely due to the limited resolution of typical digitized targets. Hence valid votes for L′ may occur over a small neighborhood. Therefore we sum accumulator values over a small neighborhood, which effectively narrows and enhances peaks in the parameter space. Rotational invariance in the plane is an objective and a result of the method. The normal to the tangent at any point on the edge of a shape should pass through the corresponding polar point. This is true no matter to what degree the shape is rotated, assuming minimal object distortion due to image digitization or digital rotation algorithms. Invariance to isotropic scaling is also achieved, which is a common perspective projection approximation (weak perspectivity).

2.4 Pose estimation

The voting process described in section 2.3 is repeated in this, the second, stage. However, instead of casting votes, an edgel arctangent (βi) that would have voted exactly for L′ indexes the R-table. θA is retrieved, and the difference between βi and θA is an estimate of the target rotation angle. The estimates reported by all βi are stored in a histogram. The peak histogram value is the reported rotation angle. Concurrently, for each βi, the target intra-polar-pair distance (ρ′) is calculated, and ρ is retrieved from the R-table. The measure ρ′/ρ is stored in a separate histogram, and that peak is the reported scale.

3 Performance

Two important engineering concerns in coding are the size of memory required and the speed of computation. Although there is often a tradeoff between these two, in our method they are almost entirely independent of one another [1].
Denote by N the image size in pixels, and by n the prototype object edgel count. Disregarding a small overhead, the data-space memory requirement in bytes is 4N + 7n. Two bytes are allotted for each of the input and output image pixels. Seven bytes are required per R-table entry. Speed of computation is dictated by the nature of the method and by the implementation. Our implementation is coded in C, and it employs no floating-point calculations and no divisions. While this speeds up execution on many uniprocessor platforms, it should also prove useful for porting routines to some specialized parallel processors. Tangents and arctangents are calculated by lookup tables that have integer entries. The Bresenham line technique involves only additions and comparisons. Distance calculations do require some integer multiplications.
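The stated 4N + 7n data-space requirement is easy to check with a quick calculation; the image size and edgel count below are example values only, not figures from the paper:

```python
def ght_data_bytes(N, n):
    """Data-space requirement from the text: two bytes per pixel for
    each of the input and output images (4N total), plus seven bytes
    per R-table entry, of which there are at most n."""
    return 4 * N + 7 * n

# e.g. a 512 x 512 image with 5000 prototype edgels (example values):
print(ght_data_bytes(512 * 512, 5000))   # prints 1083576 (about 1 MB)
```

Even for a full-frame image of the era, the table itself (7n bytes) is a small fraction of the total; the image buffers dominate.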

The algorithm's computation time is O(n²), where n is the number of edgels in the target image. For a typical case, the time for each stage is:

t = n(t0 + p·tp + r·tr + r′·v·tv) + N·tN,

where the symbols are defined as:

t0: general overhead for processing an edgel
p: the number of pixels visited along the line to find polar-pair edgels
tp: the time to visit a line's pixel
r: the average number of R-table entries per point
tr: the processing time required to determine whether a table match is to be used for voting
r′: the average number of R-table entries per edgel that will vote
v: the average number of (parameter space) pixels required for each voting line
tv: the time to add a vote in the voting line
n: the number of edgels in the target image
N: the number of pixels in the target image
tN: the time to visit a pixel while raster scanning

Hence, the time for both stages (target detection and pose estimation) is approximately 2t. Note that N·tN is the time to raster scan an input image. For most real images, it is small enough to be neglected compared to edgel processing. For example, in a 512 × 512 pixel image, N·tN contributes about 5% to t. The values for p and v depend upon the size of the target object's image and the amount of scaling being allowed, and each is O(n). The values for r and r′ are shape dependent. For a boundary case, the circle, both will be close to one. Experimentally we find r to be about 4 for random cases. Currently a polar-pair match is looked for in both normal directions of an edge point. It should be feasible to first determine which direction is towards the object's interior. Once the edges and tangents are calculated, the remainder of the process can be carried out by an arbitrary number of processors, as long as they have either individual or common access to the R-table, the target image, and the output (vote accumulator) image. The order of processing of each edgel in the target image does not matter.
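As a concrete illustration of the pose-estimation stage of section 2.4, the modal rotation and scale can be read from simple histograms of the per-edgel estimates. The function name, the histogram binning, and the bin widths here are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def modal_pose(angle_diffs_deg, scale_ratios, scale_bin=0.05):
    """Report the modal rotation angle and modal scale from per-edgel
    estimates gathered during confirmation: angle_diffs_deg holds the
    (beta_i - theta_A) rotation estimates, scale_ratios the rho'/rho
    measures. Angles are binned to whole degrees, scales to bins of
    width scale_bin (an illustrative choice)."""
    rot_hist = Counter(int(round(a)) % 360 for a in angle_diffs_deg)
    scale_hist = Counter(round(s / scale_bin) for s in scale_ratios)
    rotation = rot_hist.most_common(1)[0][0]   # peak rotation bin
    scale = scale_hist.most_common(1)[0][0] * scale_bin
    return rotation, scale
```

Taking the histogram peak, rather than a mean, keeps a minority of false polar-pair matches from biasing the reported pose.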
4 Experimental results

Two examples are described to illustrate the technique's capability. Figure 2 describes an attempt to locate multiple instances of the single-hole key. The input is about 0.25 × 10⁶ pixels and 5 × 10³ edgels. In figure 3, the test image is composed of clip-art silhouettes. The bear shape is easily found, even when about half of its area is occluded. The occlusion is a deformation which might arise from shadowing as well as

other objects (in this case, a cat shape). The input image is about 150 × 10⁶ pixels, and the edgel count is about 3 x. Our method yields the best results for irregular shapes: ones with a variety of difference angles. It does work for more regular shapes, but these tend to yield less pronounced peaks [1]. The method is useful with respect to images that have incomplete edge patterns, since no edge following is used. The method fails for targets that have one side completely obscured. In order to perform a lookup in the R-table, two points are needed. If a large number of pixels have no matching pixels along their normals, then there will not be sufficient information available for detection.

5 Related research

The generalized Hough transform is a well-known method for detecting arbitrary shapes, first proposed by Ballard [4]. Extensions to the GHT have been proposed more recently to include scale and rotation invariance. Davies [5] provides a thorough discussion of the GHT, and he reviews some proposed ways to lower the GHT's computational complexity from brute force (O(N⁴)) when scale or rotation invariance is incorporated. Here, N is one dimension of the Hough parameter (voting) space. That discussion is largely limited to the processing of analytic shapes, i.e., shapes described by closed-form elementary functions. More recently a few researchers have described methods to implement a scale- and rotation-invariant GHT [6], [7]. Each method suffers from one or more limitations compared to ours: the method is of higher time complexity, or much less computationally efficient, or the method cannot tolerate any occlusion. These methods are more fully summarized in [1].

6 Conclusion

We have described a GHT object detection and pose estimation method that is invariant to moderate changes in object position, orientation, and scale. The method is of comparatively low time complexity (O(n²)).
Our algorithm is distinguished from other methods in several ways, resulting in a software tool that we hope may be practical and useful within myriad vision research and engineering settings. For example, the method is not thwarted by a fair amount of object occlusion. Multiple object instances can be found at once. The implemented software pays close attention to computational efficiency. Initial results seem promising; however, more experiments are required to quantify performance, and to characterize performance degradation with image complexity.

References

1. Weinstein, L. and DeCatrel, J. Scale- and rotation-invariant Hough transform for fast silhouette object detection. CS Tech. Rept., Dept. of Computer Science, Florida State University.
2. Shen, J. and Castan, S. Further results on DRF method for edge detection. Proc. 9th Intl Conf. Pattern Recognition, Rome.
3. Foley, J. et al. Computer Graphics: Principles and Practice. Addison-Wesley, Reading, MA.
4. Ballard, D. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 1981, 13.
5. Davies, E. R. Machine Vision: Theory, Algorithms, Practicalities. Academic Press, San Diego.
6. Ser, P.K. and Siu, W.C. Non-analytic object recognition using the Hough transform with the matching technique. IEE Proc.-Comput. Digit. Tech., 1994, 141(1).
7. Jeng, S.C. and Tsai, W.H. Scale- and orientation-invariant generalized Hough transform: a new approach. Pattern Recognition, 1991, 24(11).


Figure 2: (a) Prototype silhouette of a digitized key. Localization point (L) is marked by +. (b) Input image contains two partly occluded, transformed target instances. (c) Pose module reports one instance at -25 degrees rotation, scaled at 0.8. Another instance is reported at 177 degrees, scaled at 0.5. The algorithm has superimposed its pose estimates on the input edge image. (d) Elevation plot (histogram) of the Hough parameter space.

Figure 3: (a) Clip-art shapes include a bear prototype (upper left), and another instance with 50% occlusion of its area (by a cat shape). (b) Plot of votes, where the two highest peaks correspond to the two instances.


More information

Context based optimal shape coding

Context based optimal shape coding IEEE Signal Processing Society 1999 Workshop on Multimedia Signal Processing September 13-15, 1999, Copenhagen, Denmark Electronic Proceedings 1999 IEEE Context based optimal shape coding Gerry Melnikov,

More information

Types of Edges. Why Edge Detection? Types of Edges. Edge Detection. Gradient. Edge Detection

Types of Edges. Why Edge Detection? Types of Edges. Edge Detection. Gradient. Edge Detection Why Edge Detection? How can an algorithm extract relevant information from an image that is enables the algorithm to recognize objects? The most important information for the interpretation of an image

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

CS664 Lecture #21: SIFT, object recognition, dynamic programming

CS664 Lecture #21: SIFT, object recognition, dynamic programming CS664 Lecture #21: SIFT, object recognition, dynamic programming Some material taken from: Sebastian Thrun, Stanford http://cs223b.stanford.edu/ Yuri Boykov, Western Ontario David Lowe, UBC http://www.cs.ubc.ca/~lowe/keypoints/

More information

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE) Features Points Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Finding Corners Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so

More information

Performance Characterization in Computer Vision

Performance Characterization in Computer Vision Performance Characterization in Computer Vision Robert M. Haralick University of Washington Seattle WA 98195 Abstract Computer vision algorithms axe composed of different sub-algorithms often applied in

More information

What is Computer Vision?

What is Computer Vision? Perceptual Grouping in Computer Vision Gérard Medioni University of Southern California What is Computer Vision? Computer Vision Attempt to emulate Human Visual System Perceive visual stimuli with cameras

More information

Fitting: The Hough transform

Fitting: The Hough transform Fitting: The Hough transform Voting schemes Let each feature vote for all the models that are compatible with it Hopefully the noise features will not vote consistently for any single model Missing data

More information

Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection

Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides

More information

Fingerprint Classification Using Orientation Field Flow Curves

Fingerprint Classification Using Orientation Field Flow Curves Fingerprint Classification Using Orientation Field Flow Curves Sarat C. Dass Michigan State University sdass@msu.edu Anil K. Jain Michigan State University ain@msu.edu Abstract Manual fingerprint classification

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou an edge image, nd line or curve segments present Given the image. in Line and Curves Detection 1 Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

Edge and corner detection

Edge and corner detection Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements

More information

Horus: Object Orientation and Id without Additional Markers

Horus: Object Orientation and Id without Additional Markers Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky

More information

Object and Class Recognition I:

Object and Class Recognition I: Object and Class Recognition I: Object Recognition Lectures 10 Sources ICCV 2005 short courses Li Fei-Fei (UIUC), Rob Fergus (Oxford-MIT), Antonio Torralba (MIT) http://people.csail.mit.edu/torralba/iccv2005

More information

On Resolving Ambiguities in Arbitrary-Shape extraction by the Hough Transform

On Resolving Ambiguities in Arbitrary-Shape extraction by the Hough Transform On Resolving Ambiguities in Arbitrary-Shape extraction by the Hough Transform Eugenia Montiel 1, Alberto S. Aguado 2 and Mark S. Nixon 3 1 imagis, INRIA Rhône-Alpes, France 2 University of Surrey, UK 3

More information

Robust Ring Detection In Phase Correlation Surfaces

Robust Ring Detection In Phase Correlation Surfaces Griffith Research Online https://research-repository.griffith.edu.au Robust Ring Detection In Phase Correlation Surfaces Author Gonzalez, Ruben Published 2013 Conference Title 2013 International Conference

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

MPEG-7 Visual shape descriptors

MPEG-7 Visual shape descriptors MPEG-7 Visual shape descriptors Miroslaw Bober presented by Peter Tylka Seminar on scientific soft skills 22.3.2012 Presentation Outline Presentation Outline Introduction to problem Shape spectrum - 3D

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Anno accademico 2006/2007. Davide Migliore

Anno accademico 2006/2007. Davide Migliore Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

CHAPTER 5 MOTION DETECTION AND ANALYSIS

CHAPTER 5 MOTION DETECTION AND ANALYSIS CHAPTER 5 MOTION DETECTION AND ANALYSIS 5.1. Introduction: Motion processing is gaining an intense attention from the researchers with the progress in motion studies and processing competence. A series

More information

Fitting. Lecture 8. Cristian Sminchisescu. Slide credits: K. Grauman, S. Seitz, S. Lazebnik, D. Forsyth, J. Ponce

Fitting. Lecture 8. Cristian Sminchisescu. Slide credits: K. Grauman, S. Seitz, S. Lazebnik, D. Forsyth, J. Ponce Fitting Lecture 8 Cristian Sminchisescu Slide credits: K. Grauman, S. Seitz, S. Lazebnik, D. Forsyth, J. Ponce Fitting We want to associate a model with observed features [Fig from Marszalek & Schmid,

More information

University of Cambridge Engineering Part IIB Module 4F12 - Computer Vision and Robotics Mobile Computer Vision

University of Cambridge Engineering Part IIB Module 4F12 - Computer Vision and Robotics Mobile Computer Vision report University of Cambridge Engineering Part IIB Module 4F12 - Computer Vision and Robotics Mobile Computer Vision Web Server master database User Interface Images + labels image feature algorithm Extract

More information

Real-time Corner and Polygon Detection System on FPGA. Chunmeng Bi and Tsutomu Maruyama University of Tsukuba

Real-time Corner and Polygon Detection System on FPGA. Chunmeng Bi and Tsutomu Maruyama University of Tsukuba Real-time Corner and Polygon Detection System on FPGA Chunmeng Bi and Tsutomu Maruyama University of Tsukuba Outline Introduction Algorithms FPGA Implementation Experimental Results Conclusions and Future

More information

Edges and Lines Readings: Chapter 10: better edge detectors line finding circle finding

Edges and Lines Readings: Chapter 10: better edge detectors line finding circle finding Edges and Lines Readings: Chapter 10: 10.2.3-10.3 better edge detectors line finding circle finding 1 Lines and Arcs Segmentation In some image sets, lines, curves, and circular arcs are more useful than

More information

Basic Algorithms for Digital Image Analysis: a course

Basic Algorithms for Digital Image Analysis: a course Institute of Informatics Eötvös Loránd University Budapest, Hungary Basic Algorithms for Digital Image Analysis: a course Dmitrij Csetverikov with help of Attila Lerch, Judit Verestóy, Zoltán Megyesi,

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

Module 7 VIDEO CODING AND MOTION ESTIMATION

Module 7 VIDEO CODING AND MOTION ESTIMATION Module 7 VIDEO CODING AND MOTION ESTIMATION Lesson 20 Basic Building Blocks & Temporal Redundancy Instructional Objectives At the end of this lesson, the students should be able to: 1. Name at least five

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

Robot vision review. Martin Jagersand

Robot vision review. Martin Jagersand Robot vision review Martin Jagersand What is Computer Vision? Computer Graphics Three Related fields Image Processing: Changes 2D images into other 2D images Computer Graphics: Takes 3D models, renders

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh, CEO at Mapizy (www.mapizy.com) and InFarm (www.infarm.io) Lecture 02 Binary Image Analysis Objectives Revision of image formation

More information

CPSC 425: Computer Vision

CPSC 425: Computer Vision 1 / 45 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 45 Menu March 3, 2016 Topics: Hough

More information

Image Analysis. Edge Detection

Image Analysis. Edge Detection Image Analysis Edge Detection Christophoros Nikou cnikou@cs.uoi.gr Images taken from: Computer Vision course by Kristen Grauman, University of Texas at Austin (http://www.cs.utexas.edu/~grauman/courses/spring2011/index.html).

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 09 130219 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Feature Descriptors Feature Matching Feature

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Computer Vision for HCI. Topics of This Lecture

Computer Vision for HCI. Topics of This Lecture Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi

More information

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular

More information

Feature Selection. Ardy Goshtasby Wright State University and Image Fusion Systems Research

Feature Selection. Ardy Goshtasby Wright State University and Image Fusion Systems Research Feature Selection Ardy Goshtasby Wright State University and Image Fusion Systems Research Image features Points Lines Regions Templates 2 Corners They are 1) locally unique and 2) rotationally invariant

More information

Judging Whether Multiple Silhouettes Can Come from the Same Object

Judging Whether Multiple Silhouettes Can Come from the Same Object Judging Whether Multiple Silhouettes Can Come from the Same Object David Jacobs 1, eter Belhumeur 2, and Ian Jermyn 3 1 NEC Research Institute 2 Yale University 3 New York University Abstract. We consider

More information

Lecture 6: Edge Detection

Lecture 6: Edge Detection #1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform

More information

Corner Detection. Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology

Corner Detection. Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology Corner Detection Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology rhody@cis.rit.edu April 11, 2006 Abstract Corners and edges are two of the most important geometrical

More information

Edge detection. Goal: Identify sudden. an image. Ideal: artist s line drawing. object-level knowledge)

Edge detection. Goal: Identify sudden. an image. Ideal: artist s line drawing. object-level knowledge) Edge detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the image can be encoded in the edges More compact than pixels Ideal: artist

More information

CS 534: Computer Vision 3D Model-based recognition

CS 534: Computer Vision 3D Model-based recognition CS 534: Computer Vision 3D Model-based recognition Spring 2004 Ahmed Elgammal Dept of Computer Science CS 534 3D Model-based Vision - 1 Outlines Geometric Model-Based Object Recognition Choosing features

More information

Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder]

Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder] Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder] Preliminaries Recall: Given a smooth function f:r R, the function

More information

Morphological Image Processing

Morphological Image Processing Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures

More information

Chapter 11 Arc Extraction and Segmentation

Chapter 11 Arc Extraction and Segmentation Chapter 11 Arc Extraction and Segmentation 11.1 Introduction edge detection: labels each pixel as edge or no edge additional properties of edge: direction, gradient magnitude, contrast edge grouping: edge

More information

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1 Machine vision systems Problem definition Image acquisition Image segmentation Connected component analysis Machine vision systems - 1 Problem definition Design a vision system to see a flat world Page

More information

Fitting: Voting and the Hough Transform April 23 rd, Yong Jae Lee UC Davis

Fitting: Voting and the Hough Transform April 23 rd, Yong Jae Lee UC Davis Fitting: Voting and the Hough Transform April 23 rd, 2015 Yong Jae Lee UC Davis Last time: Grouping Bottom-up segmentation via clustering To find mid-level regions, tokens General choices -- features,

More information

IRIS recognition II. Eduard Bakštein,

IRIS recognition II. Eduard Bakštein, IRIS recognition II. Eduard Bakštein, edurard.bakstein@fel.cvut.cz 22.10.2013 acknowledgement: Andrzej Drygajlo, EPFL Switzerland Iris recognition process Input: image of the eye Iris Segmentation Projection

More information

2D/3D Geometric Transformations and Scene Graphs

2D/3D Geometric Transformations and Scene Graphs 2D/3D Geometric Transformations and Scene Graphs Week 4 Acknowledgement: The course slides are adapted from the slides prepared by Steve Marschner of Cornell University 1 A little quick math background

More information

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS Shubham Saini 1, Bhavesh Kasliwal 2, Shraey Bhatia 3 1 Student, School of Computing Science and Engineering, Vellore Institute of Technology, India,

More information

An edge is not a line... Edge Detection. Finding lines in an image. Finding lines in an image. How can we detect lines?

An edge is not a line... Edge Detection. Finding lines in an image. Finding lines in an image. How can we detect lines? Edge Detection An edge is not a line... original image Cann edge detector Compute image derivatives if gradient magnitude > τ and the value is a local maimum along gradient direction piel is an edge candidate

More information

Recognizing Deformable Shapes. Salvador Ruiz Correa Ph.D. UW EE

Recognizing Deformable Shapes. Salvador Ruiz Correa Ph.D. UW EE Recognizing Deformable Shapes Salvador Ruiz Correa Ph.D. UW EE Input 3-D Object Goal We are interested in developing algorithms for recognizing and classifying deformable object shapes from range data.

More information

Lecture 21: Shading. put your trust in my shadow. Judges 9:15

Lecture 21: Shading. put your trust in my shadow. Judges 9:15 Lecture 21: Shading put your trust in my shadow. Judges 9:15 1. Polygonal Models Polygonal models are one of the most common representations for geometry in Computer Graphics. Polygonal models are popular

More information

Visuelle Perzeption für Mensch- Maschine Schnittstellen

Visuelle Perzeption für Mensch- Maschine Schnittstellen Visuelle Perzeption für Mensch- Maschine Schnittstellen Vorlesung, WS 2009 Prof. Dr. Rainer Stiefelhagen Dr. Edgar Seemann Institut für Anthropomatik Universität Karlsruhe (TH) http://cvhci.ira.uka.de

More information

3D Reconstruction Of Occluded Objects From Multiple Views

3D Reconstruction Of Occluded Objects From Multiple Views 3D Reconstruction Of Occluded Objects From Multiple Views Cong Qiaoben Stanford University Dai Shen Stanford University Kaidi Yan Stanford University Chenye Zhu Stanford University Abstract In this paper

More information

Patch-based Object Recognition. Basic Idea

Patch-based Object Recognition. Basic Idea Patch-based Object Recognition 1! Basic Idea Determine interest points in image Determine local image properties around interest points Use local image properties for object classification Example: Interest

More information

CS 231A Computer Vision (Winter 2014) Problem Set 3

CS 231A Computer Vision (Winter 2014) Problem Set 3 CS 231A Computer Vision (Winter 2014) Problem Set 3 Due: Feb. 18 th, 2015 (11:59pm) 1 Single Object Recognition Via SIFT (45 points) In his 2004 SIFT paper, David Lowe demonstrates impressive object recognition

More information

High-Level Computer Vision

High-Level Computer Vision High-Level Computer Vision Detection of classes of objects (faces, motorbikes, trees, cheetahs) in images Recognition of specific objects such as George Bush or machine part #45732 Classification of images

More information

Detecting Ellipses via Bounding Boxes

Detecting Ellipses via Bounding Boxes Detecting Ellipses via Bounding Boxes CHUN-MING CHANG * Department of Information and Design, Asia University, Taiwan ABSTRACT A novel algorithm for ellipse detection based on bounding boxes is proposed

More information

Elaborazione delle Immagini Informazione Multimediale. Raffaella Lanzarotti

Elaborazione delle Immagini Informazione Multimediale. Raffaella Lanzarotti Elaborazione delle Immagini Informazione Multimediale Raffaella Lanzarotti HOUGH TRANSFORM Paragraph 4.3.2 of the book at link: szeliski.org/book/drafts/szeliskibook_20100903_draft.pdf Thanks to Kristen

More information

3D Shape Recovery of Smooth Surfaces: Dropping the Fixed Viewpoint Assumption

3D Shape Recovery of Smooth Surfaces: Dropping the Fixed Viewpoint Assumption IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO., 1 3D Shape Recovery of Smooth Surfaces: Dropping the Fixed Viewpoint Assumption Yael Moses Member, IEEE and Ilan Shimshoni Member,

More information