Detecting Bilateral Symmetry in Perspective


Hugo Cornelius and Gareth Loy
Computational Vision & Active Perception Laboratory, Royal Institute of Technology (KTH), Stockholm, Sweden

Abstract

A method is presented for efficiently detecting bilateral symmetry on planar surfaces under perspective projection. The method is able to detect local or global symmetries, locate symmetric surfaces in complex backgrounds, and detect multiple incidences of symmetry. Symmetry is simultaneously evaluated across all locations, scales, orientations and under perspective skew. Feature descriptors robust to local affine distortion are used to match pairs of symmetric features. Feature quadruplets are then formed from these symmetric feature pairs. Each quadruplet hypothesises a locally planar 3D symmetry that can be extracted under perspective distortion. The method is posed independently of a specific feature detector or descriptor. Results are presented demonstrating the efficacy of the method for detecting bilateral symmetry under perspective distortion. Our unoptimised Matlab implementation, running on a standard PC, requires of the order of 20 seconds to process images with 1,000 feature points.

1. Introduction

We live in a 3D world full of symmetries. The most familiar form of symmetry, bilateral or mirror symmetry, is common among many manufactured objects and living organisms. It is this form of symmetry that will be the focus of this paper. Humans are very good at detecting symmetry. Psychophysical evidence points to symmetry detection being a pre-attentive process [2] and also one of the cues directing visual attention [2, 9, 14, 25]. Wagemans [31] views symmetry as one of the most important aspects of early visual analysis, and recent psychophysical results [30] suggest that the detection of symmetries under perspective distortion is an integral part of 3D object perception. Symmetry gives balance and form to an object's structure and appearance, but it also ties together visual features that can otherwise appear diffuse. Accordingly, features on symmetric objects can be grouped together on the basis of their common symmetry. Whilst objects exhibiting symmetry will often not appear symmetric in the image plane, their symmetry may still be evident under affine or perspective projection.

The contribution of this paper is a new and efficient method for detecting planar bilateral symmetry under affine and perspective projection, and grouping the associated symmetric constellations of features. Modern feature descriptors [20] are used to establish pairs of reflective features, and these are in turn paired to form quadruplets. The potential symmetries of each of these quadruplets are determined, and they are grouped into symmetric constellations about common symmetry foci, identifying both the dominant bilateral symmetries present and a set of symmetric features associated with each focus. The method is independent of the feature detector and descriptor used. Symmetry is simultaneously evaluated across all locations, scales and orientations, and also across a range of affine or perspective distortions limited only by the matching capacity of the features used. The method is able to detect local or global symmetries, locate symmetric surfaces in complex backgrounds, and detect multiple incidences of symmetry. This is all done from a single view and requires no reconstruction of the 3D scene.
The remainder of this paper is organised as follows: Section 2 reviews previous work, Section 3 describes the method, Section 4 presents experimental results and discusses the performance of the method, and Section 5 presents our conclusions.

2. Background

Symmetry detection has a long history in computer vision dating back to the 1970s (e.g. [3, 17, 19, 24, 36]). It has found use in numerous applications, including facial image analysis [23], vehicle detection [11, 37], reconstruction [7, 32], visual attention [17, 24, 25], indexing of image databases [26], classifying regular patterns [13], completion of occluded shapes [35], object detection [19, 36] and detecting tumours in medical imaging [18]. Existing symmetry detection techniques can be broadly classified into global and local feature-based methods.

Global approaches treat the entire image as a signal from which symmetric properties are inferred, often via frequency analysis. Examples include the work of Marola [19], who proposed a method for detecting symmetry in images where the axis of symmetry intersects or passes near the centroid, Keller and Shkolnisky [10], who took an algebraic approach and employed Fourier analysis to detect symmetry, and Sun [27], who showed that the orientation of the dominant bilateral symmetry axis could be computed from the histogram of gradient orientations. The main drawbacks of these global approaches are their inability to detect multiple incidences of symmetry and the adverse effect of background structure on the result.

Local feature-based methods use local features, edges, contours or boundary points, to reduce the problem to one of grouping symmetric sets of points or lines. The standard way to accumulate symmetry hypotheses from feature information is to use the Hough transform. Cham and Cipolla [1] detected skew symmetry in 2D edge maps using a modified Hough transform. Yiwu and Cheong [34] also detected skew symmetry in 2D edge maps using a Hough-based approach; they used a 3D Hough space and de-skewed the detected symmetric regions. Yip [33] demonstrated a Hough transform based approach for detecting symmetry in artificial images. In 2003 Tuytelaars et al. [29] presented impressive results detecting regular repetitions of planar patterns under perspective skew using a geometric framework. The approach detected all planar homologies (a plane projective transformation is a planar homology if it has a line of fixed points, called the axis, together with a fixed point, called the vertex, not on the line [6]) and could thus find reflections about a point, periodicities, and mirror symmetries. The method built clusters of matching points using a cascaded Hough transform, and typically took around 10 minutes to process an image. In contrast the new method proposed herein provides a simpler and more efficient approach. We utilise local feature information to establish robust symmetric matches on a feature level, and merge these into quadruplets which then vote for single symmetry hypotheses.

Lazebnik et al. [12] also noted that affine invariant clusters of features could be matched within an image to detect symmetries under affine projection. The method used rotationally invariant descriptors (providing no orientation information), which restricted it, like [29], to a cluster-based approach. In comparison, our method uses orientation information to provide a more robust assessment of symmetry both between feature pairs and within groups of features. By assembling quadruplets from pairs of symmetric features, we are able to detect symmetry not only under affine, but also perspective projection.

Recently a new method [16] was proposed for grouping symmetric constellations of features and detecting symmetry in the image plane. Pairs of symmetrically matching features were computed and each pair hypothesised either a bilateral symmetry axis or a centre of rotational symmetry. The pairs were grouped into symmetric constellations of features about common symmetry foci, identifying the dominant symmetries present in the image plane. Here we take the concept of matching reflective pairs of features proposed by [16] and extend this to detect planar bilateral symmetries under affine or perspective skew.
Detecting symmetry under perspective skew requires using feature quadruplets rather than just pairs, and estimating vanishing points to assess local symmetries. The directions of the features are used to verify that the pairs of feature pairs in a quadruplet have a common axis of symmetry. This paper presents a generic approach to detecting planar bilateral symmetry in images, and grouping features associated with each instance of symmetry.

3. Detecting Planar Symmetry in Perspective

In general, objects and thus symmetric surfaces are observed in perspective. Symmetry in the image plane is a subset of the perspective case, and affine, or weak perspective, provides a close approximation to perspective when the distance from the camera is much greater than the change of depth across the object. The solution presented here for detecting symmetry in perspective will also detect symmetry in the image plane and affine symmetries. However, if it is known a priori that the symmetry will be observed either in the image plane, or under approximately affine viewing conditions, then the method can be constrained to take account of this additional information, yielding a more computationally efficient method. See Section 3.3 for the affine case and [16] for symmetry in the image plane.

Our method is based on matching symmetric pairs of features that are then grouped into pairs-of-pairs to form feature quadruplets. Each quadruplet specifies a particular axis of symmetry, and we group quadruplets hypothesising common symmetry axes in the image. For a quadruplet to correctly specify a symmetry axis the 3D locations of the features must be close to coplanar, and both reflective pairs must share the same axis of symmetry. A number of image-based constraints are placed on the quadruplets to reject cases that cannot satisfy these conditions. These image-based constraints are necessary, but not sufficient, so we rely on the consistency of correct symmetry hypotheses dominating the noise of incorrect hypotheses to reveal the presence of true 3D symmetries in the image. The remainder of this section discusses the details of this procedure for detecting planar bilateral symmetry under perspective skew.

3.1. Defining Feature Points

A set of feature points p_i is determined using any method (e.g. [22]) that offers robustness to affine deformation and detects distinctive points with good repeatability. Next a feature descriptor k_i is generated for each feature point, encoding the local appearance of the feature after it has been normalised with respect to rotation, scale and affine distortion. Any feature descriptor suitable for matching can be used as long as it provides a direction for each feature point. See [21] for a review of leading techniques. The experiments in this paper use the SIFT descriptor [15], which gives k_i as a 128 element vector.

Our method utilises the orientation of features when assessing symmetry. Affine invariant features typically define elliptical feature regions that are warped to circular regions before the feature descriptor is computed. The feature descriptor computes a feature's orientation after this warping. To obtain the true directions of the features in the image we must reverse the effect of this warping, as shown in Fig. 1.

Figure 1. An affine-invariant feature region is warped to a circle before its descriptor is computed. The feature orientation must be unwarped to give the true orientation of the feature in the image.

A set of mirrored feature descriptors m_i is also generated. Here m_i is the feature descriptor generated from a mirrored copy of the image patch associated with feature k_i. The choice of mirroring axis is arbitrary owing to the orientation normalisation in the generation of the descriptor. The mirrored feature descriptors m_i can be directly computed from the reflected local image patch or, depending on the construction of the feature descriptor being used, determined directly from k_i [16]. Matches are sought between the features k_i and the mirrored features m_j to form a set of (p_i, p_j) pairs of potentially symmetric features.
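To make this matching step concrete, the following is a minimal Python/NumPy sketch, assuming the descriptors k_i and the mirrored descriptors m_i are stacked row-wise in (n, 128) arrays; the brute-force search and the Lowe-style ratio test are illustrative choices of ours, not details fixed by the paper.

```python
import numpy as np

def match_mirrored(K, M, ratio=0.8):
    """Match each descriptor k_i against all mirrored descriptors m_j.

    K, M : (n, 128) arrays of descriptors and mirrored descriptors.
    Returns a list of (i, j) index pairs of potentially symmetric
    features. The ratio test against the second-best candidate is an
    illustrative way of keeping only distinctive matches.
    """
    pairs = []
    for i, k in enumerate(K):
        d = np.linalg.norm(M - k, axis=1)  # distance to every mirrored descriptor
        d[i] = np.inf                      # a feature's own mirror is a degenerate pair
        j, j2 = np.argsort(d)[:2]          # best and second-best candidates
        if d[j] < ratio * d[j2]:
            pairs.append((i, j))
    return pairs
```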
3.2. Symmetric Quadruplets

After forming reflective pairs the algorithm searches for symmetric quadruplets. These quadruplets are formed by grouping two pairs of reflective features, and each quadruplet hypothesises an axis of symmetry. Fig. 2 shows a schematic of a quadruplet of features made up of two reflective feature pairs (p_1, p_2) and (q_1, q_2) lying in the same plane and sharing the same symmetry axis. Such a quadruplet of non-colinear reflective pairs specifies a precise symmetry hypothesis that can be determined via a simple geometric construction: find the intersection points c_1 and c_2 of the lines linking the non-paired members of the quadruplet, and the line linking these points will be the symmetry axis. An alternative is to use the vanishing point v of the lines linking the reflective pairs, and compute the 3D midpoints of these lines, v_1 and v_2, using projective geometry. The image co-ordinates of the 3D midpoint v_1 are given by

v_1 = (d_2 p_1 + d_1 p_2) / (d_1 + d_2)    (1)

where d_i is the distance from v to p_i in the image plane. The 3D midpoint of the other pair, v_2, is determined in a similar manner, and together they define the axis of symmetry (this construction is sketched in code at the end of this subsection).

For a quadruplet to be useful for discerning a true instance of symmetry the 3D locations of its four features must be close to coplanar and share a common symmetry axis. For a pair of feature pairs, we cannot know a priori if this is the case; however, we can enforce a number of conditions on each quadruplet necessary for this to hold. The conditions we require a quadruplet to satisfy are:

1. the quadruplet must be convex,
2. lines joining reflective pairs must not intersect,
3. the directions of the features must be consistent with the axis of symmetry estimated from the quadruplet, and
4. the relative difference in scale within each pair must not exceed k.

The first three conditions are necessary for a quadruplet to represent a local 3D planar symmetry. The third condition is satisfied if the features within each pair are symmetrically oriented towards or away from each other, such that they either (a) define lines that intersect on or close to the axis of symmetry (as illustrated in Fig. 2), or (b) are close to parallel to the line joining the pair. The final condition restricts the scale change across the symmetric surface. Mikolajczyk et al. [22] report a sharp drop-off in feature detector performance in structured scenes where features have changed scale by a factor of two or greater. As our method relies on accurate and robust feature performance, we can discount quadruplets that hypothesise a scale change of a factor of two or more within a reflective pair. The relative scale of a pair is directly proportional to the ratio of the distances of each feature to the vanishing point, i.e. s_1/s_2 = d_1/d_2, where s_i is the scale of the i-th feature and d_i is the distance of p_i from the vanishing point v in Fig. 2.

For each reflective feature pair, we form n quadruplets by pairing it with the n closest neighbouring pairs in the image that are at least m pixels apart (for our experiments we chose m as 5 pixels and n = 15). Forming quadruplets from pairs in close proximity increases the likelihood that these pairs will be associated with the same instance of symmetry.
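The geometric construction of this subsection reduces to a few cross products in homogeneous coordinates. The sketch below computes the axis from the intersection points c_1 and c_2, and the image position of a pair's 3D midpoint from eq. (1); it illustrates the stated geometry under our own choice of representation and is not the authors' implementation.

```python
import numpy as np

def hom(p):
    """Lift an (x, y) image point to homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def quadruplet_axis(p1, p2, q1, q2):
    """Symmetry axis hypothesised by the quadruplet (p1, p2), (q1, q2).

    c1 and c2 are the intersections of the lines linking non-paired
    members of the quadruplet; the axis is the line through them.
    Lines through two points, and intersections of two lines, are both
    cross products in homogeneous form.
    Returns line coefficients (a, b, c) with a*x + b*y + c = 0.
    """
    p1, p2, q1, q2 = hom(p1), hom(p2), hom(q1), hom(q2)
    c1 = np.cross(np.cross(p1, q2), np.cross(p2, q1))
    c2 = np.cross(np.cross(p1, q1), np.cross(p2, q2))
    return np.cross(c1, c2)

def midpoint_3d(p1, p2, v):
    """Image position of the 3D midpoint of a reflective pair, eq. (1),
    where v is the vanishing point of the lines joining the pairs."""
    p1, p2, v = map(np.asarray, (p1, p2, v))
    d1, d2 = np.linalg.norm(v - p1), np.linalg.norm(v - p2)
    return (d2 * p1 + d1 * p2) / (d1 + d2)
```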

Figure 2. A schematic of a quadruplet of features made up of two reflective feature pairs (p_1, p_2) and (q_1, q_2) sharing the same symmetry axis, showing the vanishing point v and the geometric constructions that give the symmetry axis.

However, pairs that are too close together will not give a reliable symmetry estimate. We test if the quadruplets satisfy the above conditions, and those that do not are discarded. Using the procedure described above means that we get up to n quadruplets per symmetric feature pair. Incorrectly matched feature pairs that do not lie on a symmetric surface will in general give no, or very few, quadruplets, and the total number of incorrect quadruplets in an image will usually be small.

The axes of symmetry estimated from the quadruplets can then be represented using the standard r-θ polar co-ordinate parameterisation for a line, r = x cos θ + y sin θ, where (x, y) are the image-centred co-ordinates of any point on the line (such as one of the 3D mid-points v_i) and θ is the angle perpendicular to the direction of the axis of symmetry. The linear Hough transform is applied to determine the dominant symmetry axes. Each quadruplet casts a single vote in r-θ Hough space. The resulting Hough space is blurred with a Gaussian and the maxima extracted as potential symmetry axes. The quadruplets voting in the neighbourhood of a maximum are grouped together.

Strictly speaking, for these groups of symmetric feature pairs sharing the same axis of symmetry to be consistent with an instance of planar bilateral symmetry, the lines joining all pairs must have a common vanishing point. However, in practice noise in the feature locations, and a desire to tolerate objects whose symmetry is not absolutely perfect, mean that the vanishing points will be dispersed. Feature pairs close to each other will always be close to parallel, and when the vanishing point approaches infinity this will be true for all pairs of pairs. If quadruplets with feature pairs that are close to parallel are used to estimate the vanishing point, small changes in the feature locations will lead to large changes in the position of the vanishing point. However, the position of the vanishing point will only change along the directions of the lines joining the features in the pairs, and these lines should still point towards one common vanishing point. We can therefore measure the consistency of the feature pairs by comparing the directions of the lines joining the features in the pairs. We choose a point on the axis of symmetry and, for each feature pair, measure its distance to this point. We also calculate the tangent of the angle between the line connecting the features in a pair and the axis of symmetry. If the lines between the pairs have a common vanishing point, the tangent of these angles should vary linearly with the distance to the chosen point on the symmetry axis. This linear relationship can be robustly estimated using RANSAC [5]. This procedure greatly refines our grouping, giving our final estimate of the symmetry axis and its associated features.
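A sketch of the voting and verification stages follows. The bin counts, blur width and RANSAC tolerance are illustrative parameters of our own, and SciPy's gaussian_filter stands in for whatever smoothing the authors used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dominant_axes(axes, r_max, r_bins=200, t_bins=180, sigma=2.0):
    """Vote quadruplet hypotheses (r, theta) into an r-theta Hough
    space, blur with a Gaussian, and return the accumulator together
    with its strongest cell."""
    H = np.zeros((r_bins, t_bins))
    for r, theta in axes:
        ri = int((r + r_max) / (2 * r_max) * (r_bins - 1))  # r in [-r_max, r_max]
        ti = int(theta / np.pi * (t_bins - 1))               # theta in [0, pi)
        H[ri, ti] += 1                                       # one vote per quadruplet
    H = gaussian_filter(H, sigma)
    return H, np.unravel_index(np.argmax(H), H.shape)

def vanishing_consistency(dist, tan_angle, iters=200, tol=0.05, seed=0):
    """RANSAC fit of the linear relation between the tangent of each
    pair's angle to the axis and its distance along the axis; returns
    the inlier mask of pairs consistent with one vanishing point."""
    rng = np.random.default_rng(seed)
    dist, tan_angle = np.asarray(dist), np.asarray(tan_angle)
    best = np.zeros(len(dist), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(dist), size=2, replace=False)
        if dist[i] == dist[j]:
            continue
        a = (tan_angle[j] - tan_angle[i]) / (dist[j] - dist[i])
        b = tan_angle[i] - a * dist[i]
        inliers = np.abs(tan_angle - (a * dist + b)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```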
3.3. Affine Assumption

Pairs of symmetric features sharing a common symmetry plane will be joined by lines that are parallel in 3D. Under affine projection these lines will also appear parallel in the image, which greatly simplifies the symmetry detection process. Lines joining feature pairs with a common axis of symmetry are parallel, and we know that the symmetry axis intersects the midpoints of the lines joining the pairs. The direction of the axis of symmetry can be found using the directions of the features, since the intersection of lines through the feature positions and parallel to the feature directions has to be on the axis of symmetry.

The affine detection process is as follows: first we determine the angle ψ of the line joining each pair. The pairs are then grouped according to their ψ angle and, for each feature pair, an axis of symmetry is estimated using the midpoint between the features and the intersection of the directions of the features. For groups of feature pairs with common ψ, dominant symmetry axes are found via the Hough transform, with each feature pair in the group voting for one orientation and location. The resulting feature constellations will always have consistent direction and thus consistent vanishing points (at infinity).
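The per-pair affine estimate can be sketched as follows, with feature directions assumed to be given as unit vectors; near-parallel direction lines, for which the intersection is unstable, are simply rejected in this sketch.

```python
import numpy as np

def affine_pair_axis(p1, p2, dir1, dir2, eps=1e-9):
    """Axis of symmetry from a single reflective pair, affine case.

    The axis passes through the midpoint of the pair and through the
    intersection of the lines through each feature along its direction.
    dir1, dir2 : unit direction vectors of the two features.
    Returns (point_on_axis, unit_axis_direction), or None when the
    direction lines are near parallel (degenerate for this estimate).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid = 0.5 * (p1 + p2)                       # axis passes through the midpoint
    A = np.column_stack((dir1, -np.asarray(dir2, float)))
    if abs(np.linalg.det(A)) < eps:             # feature directions (near) parallel
        return None
    s, _ = np.linalg.solve(A, p2 - p1)          # solve p1 + s*dir1 == p2 + t*dir2
    x = p1 + s * np.asarray(dir1, float)        # intersection point, on the axis
    axis = x - mid
    n = np.linalg.norm(axis)
    return None if n < eps else (mid, axis / n)
```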

3.4. Using Feature Scale

In theory the scale of reflectively matched pairs of features could also be used. Using the scale of a pair of matched features, the 3D midpoint between the features can be estimated directly, without having to form a quadruplet. However, in practice inaccuracies in the scale information obtained by current affine-invariant feature techniques make this approach unfeasible.

4. Performance

The method was implemented in Matlab, and feature points were detected and descriptors computed using code made available by the authors of these methods [15, 21, 22]. We tested our algorithm using several different feature detection methods: Hessian-affine and Harris-affine detectors [20], an intensity extrema-based region detector (IBR), and an edge-based region detector (EBR) [28]. The SIFT descriptor was used in all experiments.

Fig. 3 shows the results of our algorithm on natural images. The dominant symmetry is indicated together with other symmetries with at least half as much support as the dominant axis. The results demonstrate the method's ability to detect multiple instances of symmetry and detect symmetry planes in the presence of clutter and partial occlusion.

To quantitatively evaluate the algorithm's ability to detect symmetry under different amounts of perspective skew and in the presence of noise we generated a set of 1,350 symmetric images with known ground truth. We gathered 30 natural images not containing any obvious symmetries. Each image was concatenated with a mirrored version of itself, forming a perfectly symmetric image. These symmetric images were then skewed using a homography with the following effect: the image was rotated a random amount, and then tilted around an axis in the image plane passing through the image centre. The tilted image was viewed through a pinhole camera positioned a distance d away from, and pointing towards, the image centre. The parameter d was set equal to the longest side of the image. Gray columns or rows were added to the right side or bottom of the image, giving approximately 60% non-symmetric background in each image. Finally Gaussian noise was added to the image. For each of the 30 images gathered, 45 test images were generated, showing symmetry with perspective tilt from 0 to 80 degrees and Gaussian noise with standard deviation σ of 0, 25, 50, 75 and 100 (the graylevels of the images ranged between 0 and 255). The Harris-affine feature detector was used throughout the experiment, and the images were scaled so that just over 1,000 features were detected in every image. Fig. 4 shows examples of two test images and the resulting detected symmetries.

Figure 4. Sample images from our test database: (a) tilt 60°, no noise; (b) tilt 60° and Gaussian noise σ = 75. Note that there are approximately 1,000 feature points in each test image.

The results of the test are shown in Fig. 5. The first diagram shows the percentage of correctly detected symmetry axes for all different tilts and noise levels. The second diagram shows the mean number of feature pairs supporting the correct symmetry hypothesis (when it was detected). With no noise or tilt we see close to perfect matching between mirrored pairs of features, giving a little less than 500 feature pairs supporting the correct hypothesis. As the tilt of the symmetric surface, or the noise level, increases we cannot expect to detect identical features in both parts of the symmetric image, and the number of feature pairs that agree with the correct axis of symmetry decreases. However, the correct axis of symmetry is still found in almost all images for tilts up to 60 degrees and Gaussian noise with standard deviation up to 75.
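The test-image synthesis described above can be sketched as follows; the homography construction (in-plane rotation, tilt about a vertical axis through the centre, focal length set to d) and the use of OpenCV's warpPerspective are our interpretation of the stated procedure, and the gray background padding step is omitted for brevity.

```python
import numpy as np
import cv2

def make_test_image(img, tilt_deg, sigma, rot_deg=30.0):
    """Synthesise one ground-truth test image: mirror-concatenate,
    rotate in the plane, tilt about a vertical axis through the image
    centre, view through a pinhole camera at distance d (the longest
    image side, also used here as the focal length, an assumption of
    this sketch), and add Gaussian noise."""
    sym = np.hstack([img, img[:, ::-1]])           # perfectly symmetric image
    h, w = sym.shape[:2]
    d = float(max(h, w))                           # camera distance
    a, t = np.deg2rad(rot_deg), np.deg2rad(tilt_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],   # in-plane rotation
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(t), 0.0, np.sin(t)],    # tilt about the y-axis
                   [0.0, 1.0, 0.0],
                   [-np.sin(t), 0.0, np.cos(t)]])
    R = Ry @ Rz
    K = np.array([[d, 0.0, w / 2], [0.0, d, h / 2], [0.0, 0.0, 1.0]])
    # Homography for the plane z = 0, centred on the image and placed
    # at distance d in front of the camera: H = K [r1 r2 | R*c + t].
    t3 = R @ np.array([-w / 2, -h / 2, 0.0]) + np.array([0.0, 0.0, d])
    H = K @ np.column_stack((R[:, 0], R[:, 1], t3))
    warped = cv2.warpPerspective(sym, H, (w, h))
    noisy = warped.astype(np.float64) + np.random.normal(0.0, sigma, warped.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```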
Even though only a small part of the total number of feature pairs formed agrees with the correct axis of symmetry, we are still able to detect it. If the same experiments are run using the affine or 2D symmetry detectors instead of the perspective one, the correct symmetry axes are found in the un-tilted images. With 10 degrees tilt, these methods still perform reasonably well, detecting around 60% of the symmetry axes, but as the tilt increases the performance of both methods rapidly decreases. This is expected since the perspective effects are quite strong in the images, and the symmetries cannot be well modelled under the affine assumption.

To assess the effect of clutter in the background we repeated a subset of the above experiments using a randomly selected one of the 30 images as background, as shown in Fig. 6. The results from these tests are shown in Fig. 7. As can be seen, the textured backgrounds have little effect on the results.

Figure 3. Symmetry detected in perspective images, showing reflective matches and axes of symmetry detected.

The algorithm's performance relies heavily on the matching capability of the feature method used, and it performs best when a large number of features are detected on a symmetric surface. The basic mechanism of matching instances of texture that are locally symmetric under some affine transformation performs well for strongly textured surfaces that are close to planar, such as the carpets in Fig. 3. Less textured surfaces such as the car or the phone generate fewer matches and are not as strongly detected. This predisposition towards textured surfaces is a result of the feature type chosen; our algorithm could equally well be applied to features from a shape-based descriptor such as [8].

A substantial amount of the time consumed when detecting symmetry is used computing features and performing the matching. The analysis of the feature pairs, formation of quadruplets, grouping by symmetry axis and vanishing points, and final assessment of symmetry takes approximately 20 seconds for images with 1,000 feature points, running unoptimised Matlab code on a 2.8 GHz Pentium 4. In this scenario, using Lowe's pre-compiled SIFT code and a naive Matlab matching routine, the time for computing features and performing matching was approximately 10 seconds.

The results demonstrate the suitability of this approach for detecting planar bilateral symmetries under affine or perspective distortion. Further work will consider segmenting symmetric regions, grouping local planar symmetries with a common symmetry plane, and detecting non-coplanar symmetry planes. The groups of features associated with each symmetry axis provide a good starting point for segmenting the symmetric region. Additional feature points could be generated in the vicinity of the symmetric matches to grow symmetric regions, possibly in a similar fashion to Ferrari et al.'s object segmentation approach [4]. Segmenting the symmetric regions would also provide a more accurate measure of the extent of the axes of symmetry in the image, and allow a more rigorous verification of the correctness of each proposed symmetry axis.

5. Conclusions

This paper has presented a method for detecting bilateral planar symmetry in images under perspective or affine distortion. The method is feature-based and relies on robust, repeatable matches under local affine distortion. It is posed independently of a specific feature detector or descriptor method, and provides a generic means of grouping features that are likely to correspond to an instance of planar symmetry in 3D. Hypotheses are built from quadruplets made of pairs of reflective feature matches and the method is thus able to consider symmetry at all locations, scales, and orientations in the presence of perspective or affine distortion. The orientation of the features is used to verify the symmetry of each quadruplet and improve the robustness of the symmetry axis estimate. If it is known a priori that an affine assumption is appropriate, then using the orientations of the features makes it possible to estimate the axis of symmetry for each feature pair directly, without having to form quadruplets, giving increased computational efficiency under affine conditions. The algorithm has been employed in conjunction with different feature detectors, has been shown to successfully detect symmetry in natural images, and its performance has been quantified on over 1,000 synthesised images with varying amounts of noise and perspective distortion.

Figure 5. Results of test on 1,350 images for different noise levels σ. (a) Percentage of tests where the correct symmetry axis was found, (b) number of pairs found supporting the correct axis.

Figure 6. A sample test image with a cluttered background, tilt 60° and Gaussian noise.

Figure 7. Results of test with cluttered background on 540 images for two different noise levels σ. (a) Percentage of tests where the correct symmetry axis was found, (b) number of pairs found supporting the correct axis.

References

[1] T.-J. Cham and R. Cipolla. Symmetry detection through local skewed symmetries. Image and Vision Computing, 13(5), 1995.
[2] S. C. Dakin and A. M. Herbert. The spatial region of integration for visual symmetry detection. Proceedings of the Royal Society London B, Biological Sciences, 265, 1998.
[3] L. S. Davis. Understanding shape, II: Symmetry. IEEE Transactions on Systems, Man, and Cybernetics, 7:204–212, 1977.
[4] V. Ferrari, T. Tuytelaars, and L. Van Gool. Simultaneous object recognition and segmentation from single or multiple model views. International Journal of Computer Vision, 2006.
[5] M. Fischler and R. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, June 1981.
[6] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004.

[7] W. Hong, A. Y. Yang, K. Huang, and Y. Ma. On symmetry and multiple-view geometry: Structure, pose, and calibration from a single image. International Journal of Computer Vision, 2004.
[8] F. Jurie and C. Schmid. Scale-invariant shape features for recognition of object categories. In CVPR, 2004.
[9] L. Kaufman and W. Richards. Spontaneous fixation tendencies of visual forms. Perception and Psychophysics, 5(2):85–88, 1969.
[10] Y. Keller and Y. Shkolnisky. An algebraic approach to symmetry detection. In ICPR, 2004.
[11] A. Kuehnle. Symmetry-based recognition of vehicle rears. Pattern Recognition Letters, 12(4), 1991.
[12] S. Lazebnik, C. Schmid, and J. Ponce. Semi-local affine parts for object recognition. In BMVC, 2004.
[13] Y. Liu and R. Collins. Skewed symmetry groups. In IEEE Conference on Computer Vision and Pattern Recognition, December 2001.
[14] P. Locher and C. Nodine. Symmetry catches the eye. In J. O'Regan and A. Lévy-Schoen, editors, Eye Movements: From Physiology to Cognition. Elsevier, 1987.
[15] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[16] G. Loy and J.-O. Eklundh. Detecting symmetry and symmetric constellations of features. In ECCV, 2006.
[17] G. Loy and A. Zelinsky. Fast radial symmetry for detecting points of interest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8):959–973, August 2003.
[18] M. Mancas, B. Gosselin, and B. Macq. Fast and automatic tumoral area localisation using symmetry. In Proc. of the IEEE ICASSP Conference, 2005.
[19] G. Marola. On the detection of the axes of symmetry of symmetric and almost symmetric planar images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1):104–108, January 1989.
[20] K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. International Journal of Computer Vision, 60(1):63–86, 2004.
[21] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630, October 2005.
[22] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. International Journal of Computer Vision, 2006.
[23] S. Mitra and Y. Liu. Local facial asymmetry for expression classification. In CVPR, 2004.
[24] D. Reisfeld, H. Wolfson, and Y. Yeshurun. Context free attentional operators: The generalized symmetry transform. International Journal of Computer Vision, 14(2):119–130, 1995.
[25] G. Sela and M. D. Levine. Real-time attention for robotic vision. Real-Time Imaging, 3, 1997.
[26] D. Sharvit, J. Chan, H. Tek, and B. B. Kimia. Symmetry-based indexing of image databases. In Proc. IEEE Workshop on Content-Based Access of Image and Video Libraries, 1998.
[27] C. Sun and D. Si. Fast reflectional symmetry detection using orientation histograms. Journal of Real Time Imaging, 5(1):63–74, February 1999.
[28] T. Tuytelaars and L. Van Gool. Matching widely separated views based on affine invariant regions. International Journal of Computer Vision, 59(1):61–85, 2004.
[29] T. Tuytelaars, A. Turina, and L. Van Gool. Noncombinatorial detection of regular repetitions under perspective skew. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(4):418–432, 2003.
[30] G. van der Vloed, A. Csathó, and P. A. van der Helm. Symmetry and repetition in perspective. Acta Psychologica, 120:74–92, 2005.
[31] J. Wagemans. Detection of visual symmetries. Spatial Vision, 9(1):9–32, 1995.
[32] A. Yang, S. Rao, K. Huang, W. Hong, and Y. Ma.
Geometric segmentation of perspective images based on symmetry groups. In ICCV, 2003.
[33] R. Yip. A Hough transform technique for the detection of reflectional symmetry and skew-symmetry. Pattern Recognition Letters, 21(2):117–130, February 2000.
[34] L. Yiwu and W. K. Cheong. A novel method for detecting and localising reflectional and rotational symmetry under weak perspective projection. In ICPR, 1998.
[35] H. Zabrodsky, S. Peleg, and D. Avnir. Completion of occluded shapes using symmetry. In CVPR, 1993.
[36] H. Zabrodsky, S. Peleg, and D. Avnir. Symmetry as a continuous feature. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(12):1154–1166, December 1995.
[37] T. Zielke, M. Brauckmann, and W. von Seelen. Intensity and edge-based symmetry detection with an application to car-following. CVGIP: Image Understanding, 58(2):177–190, 1993.


More information

Designing Applications that See Lecture 7: Object Recognition

Designing Applications that See Lecture 7: Object Recognition stanford hci group / cs377s Designing Applications that See Lecture 7: Object Recognition Dan Maynes-Aminzade 29 January 2008 Designing Applications that See http://cs377s.stanford.edu Reminders Pick up

More information

Part-based and local feature models for generic object recognition

Part-based and local feature models for generic object recognition Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza

More information

III. VERVIEW OF THE METHODS

III. VERVIEW OF THE METHODS An Analytical Study of SIFT and SURF in Image Registration Vivek Kumar Gupta, Kanchan Cecil Department of Electronics & Telecommunication, Jabalpur engineering college, Jabalpur, India comparing the distance

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

Region matching for omnidirectional images using virtual camera planes

Region matching for omnidirectional images using virtual camera planes Computer Vision Winter Workshop 2006, Ondřej Chum, Vojtěch Franc (eds.) Telč, Czech Republic, February 6 8 Czech Pattern Recognition Society Region matching for omnidirectional images using virtual camera

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Methods for Representing and Recognizing 3D objects

Methods for Representing and Recognizing 3D objects Methods for Representing and Recognizing 3D objects part 1 Silvio Savarese University of Michigan at Ann Arbor Object: Building, 45º pose, 8-10 meters away Object: Person, back; 1-2 meters away Object:

More information

Estimation of Camera Pose with Respect to Terrestrial LiDAR Data

Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Wei Guan Suya You Guan Pang Computer Science Department University of Southern California, Los Angeles, USA Abstract In this paper, we present

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Motion Tracking and Event Understanding in Video Sequences

Motion Tracking and Event Understanding in Video Sequences Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Scale Invariant Feature Transform

Scale Invariant Feature Transform Scale Invariant Feature Transform Why do we care about matching features? Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Image

More information

CS 231A Computer Vision (Winter 2014) Problem Set 3

CS 231A Computer Vision (Winter 2014) Problem Set 3 CS 231A Computer Vision (Winter 2014) Problem Set 3 Due: Feb. 18 th, 2015 (11:59pm) 1 Single Object Recognition Via SIFT (45 points) In his 2004 SIFT paper, David Lowe demonstrates impressive object recognition

More information

Scale-invariant shape features for recognition of object categories

Scale-invariant shape features for recognition of object categories Scale-invariant shape features for recognition of object categories Frédéric Jurie and Cordelia Schmid GRAVIR, INRIA-CNRS, 655 avenue de l Europe, Montbonnot 38330, France {Frederic.Jurie, Cordelia.Schmid}@inrialpes.fr,

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011 Previously Part-based and local feature models for generic object recognition Wed, April 20 UT-Austin Discriminative classifiers Boosting Nearest neighbors Support vector machines Useful for object recognition

More information

FESID: Finite Element Scale Invariant Detector

FESID: Finite Element Scale Invariant Detector : Finite Element Scale Invariant Detector Dermot Kerr 1,SonyaColeman 1, and Bryan Scotney 2 1 School of Computing and Intelligent Systems, University of Ulster, Magee, BT48 7JL, Northern Ireland 2 School

More information

Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns. Direct Obstacle Detection and Motion. from Spatio-Temporal Derivatives

Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns. Direct Obstacle Detection and Motion. from Spatio-Temporal Derivatives Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns CAIP'95, pp. 874-879, Prague, Czech Republic, Sep 1995 Direct Obstacle Detection and Motion from Spatio-Temporal Derivatives

More information

Lecture 10 Detectors and descriptors

Lecture 10 Detectors and descriptors Lecture 10 Detectors and descriptors Properties of detectors Edge detectors Harris DoG Properties of detectors SIFT Shape context Silvio Savarese Lecture 10-26-Feb-14 From the 3D to 2D & vice versa P =

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information