Detecting Multiple Symmetries with Extended SIFT
Anonymous ACCV submission, Paper ID

Abstract. This paper describes an effective method for detecting multiple bilateral symmetries of planar objects under perspective projection. The method can detect multiple symmetrical objects and the multiple symmetry axes of each object. An extended SIFT feature, called pseudo-affine invariant SIFT, is proposed for detecting symmetric feature pairs whose appearance differs in images because of perspective projection. Candidates for symmetry axes are obtained by finding the two projected midpoints of every two symmetric pairs, based on the cross-ratio of four points on a line. The symmetry axis candidate that has the greatest number of symmetric pairs fitting it is detected as the most relevant symmetry axis of a symmetrical object. The other symmetry axes of the object are detected from the symmetric pairs belonging to that axis. The procedure is applied repeatedly, after eliminating the pairs of each detected symmetrical object from the symmetric pair set, until all symmetrical objects have been detected.

1 Introduction

Bilateral reflection symmetry exists in many artificial objects, animals and plants. By detecting the multiple symmetries of objects, not only can global geometric properties such as symmetry axes be obtained, but also the sets of pairs of image features at symmetric positions on each symmetrical object. This groups image features and establishes pair-wise correspondences between features scattered in an image, and thus provides rich information about the global structure of objects. This paper proposes a powerful method for detecting multiple bilateral symmetries of planar objects under perspective projection, and for grouping the associated symmetric pairs of features from a single view. We extend the SIFT feature descriptor so that it can cope with affine transformations.
This extension greatly increases the matching capability of the SIFT descriptor, so symmetric pairs can be detected much more reliably than with the original SIFT while keeping a low false-positive rate. Candidates for symmetry axes are estimated by finding the two projected midpoints of every two symmetric pairs, based on the cross-ratio of four points on a line. The symmetry axis candidate that has the greatest number of symmetric pairs fitting it is detected as the symmetry axis of a symmetrical object. All symmetry axes of the symmetrical object are then detected by estimating axis candidates from the symmetric pairs associated with the object and selecting all those that have a large number of symmetric pairs fitting them. This procedure is applied repeatedly to the remaining symmetric pairs, after eliminating the ones associated with each detected symmetrical object, until all symmetrical objects are detected. The method can cope with the
perspective distortions of not only the geometric arrangement, but also the appearance, of the symmetric pairs of features. The method is able to detect multiple symmetrical objects and multiple symmetries within the same object. It does not require that the geometric arrangement, the appearance, or the feature descriptors of the symmetric feature pairs remain symmetrical in the input images.

2 Background

There has been a great deal of work on symmetry detection in computer vision over several decades (e.g. [1][2][3][4][5][6][7][8][9][10]). Symmetry detection has been used in many applications, including reconstruction [11][12], pattern recognition [13] and stereo vision [14]. Liu et al. [15] used edge-based features and exhaustive search to identify all single reflection symmetries. Since their algorithm was designed for analyzing artistic paper-cutting patterns, the difficulties caused by perspective or affine distortion, image noise and complex backgrounds were not considered. Loy et al. [4] used SIFT to detect feature points and to find pairwise matches; symmetry was detected by voting for symmetry foci with a Hough transform. Their method can detect both reflection and rotation symmetry, but when finding pairwise matches and estimating symmetry axes it uses the orientations and positions of image features directly, without considering perspective or affine distortion of the symmetrical objects. In contrast, our method takes account of such distortions both in finding symmetric pairs and in estimating symmetry axes. Cornelius et al. [5] used several feature detectors, including SIFT, Hessian-affine and the Harris-affine detector, to detect feature points and to find pairwise matches. Symmetry was again detected by voting for symmetry foci with a Hough transform, and their method can detect reflection symmetry under perspective projection.
However, the details of the algorithm for finding pairwise matches were not given in the paper.

3 Pseudo-affine invariant SIFT features for detecting symmetric pairs

Due to perspective projection, the image patches of parts of a symmetrical object at symmetrical positions show different appearances. The appearances may differ in scale, orientation and skew, as well as reflection. In order to detect symmetric pairs of feature points in images, we need a feature detector that can detect distinctive points with good repeatability and give feature descriptors that can be used for estimating the similarities between features when perspective distortion occurs. Many symmetry detection methods use the SIFT feature detector because of its good performance in detecting features and its information-rich feature descriptor. While the SIFT descriptor is scale- and rotation-invariant, it is not invariant to skew distortion. Here we extend the SIFT descriptor so that it can be used to find pairwise matches among image patches that differ by scale, rotation and skew. We first use SIFT to detect feature points in images. SIFT gives the orientation and the scale of each feature in addition to its position. In order to give the SIFT
feature descriptor the ability to estimate the similarity between two image features that contain skew distortions, we define several skewed coordinate systems and their mirrored versions (obtained by flipping about the Y axis) for the image patch used to compute the SIFT descriptor (see figure 1). The Y axis is aligned with the orientation of the detected feature. We quantize and enumerate the angle between the X and Y axes of the skewed coordinate systems; in the experiments, we let the angles be 60, 90 and 120 degrees. Each angle defines a skewed coordinate system, which is used to normalize the image patch so that the skewed axes become orthogonal. We use each of the normalized image patches to re-compute a SIFT descriptor. Thus, each feature point p has a set of SIFT descriptors f(p):

    f(p) = { f_i(p), m_i(p) },  i = 1, 2, 3,    (1)

where f_i(p) and m_i(p) are the SIFT descriptors (feature vectors of 128 dimensions) computed using the i-th skewed coordinate system and its mirrored version, respectively. We call this extended SIFT the pseudo-affine SIFT feature, or simply PA-SIFT.

Fig. 1. Definition of pseudo-affine SIFT features (PA-SIFTs).

4 Detecting symmetric pairs with PA-SIFTs

In order to detect symmetric pairs of features, the difference between any two features is estimated by considering both their feature difference and their scale difference. The feature difference between two feature points p and q is computed from their PA-SIFTs as

    F(f(p), f(q)) = min_{i,j in {1,2,3}} ( ||f_i(p) - m_j(q)||, ||m_i(p) - f_j(q)|| ).    (2)
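The construction above can be sketched as follows. This is a rough illustration, not the paper's implementation: `patch_sampler` and `sift_descriptor` are hypothetical callables standing in for a real SIFT pipeline, and the basis-matrix parameterization of the skewed axes is our own assumption.

```python
import numpy as np

SKEW_ANGLES_DEG = (60.0, 90.0, 120.0)  # angles between the skewed X and Y axes

def skew_bases():
    """Basis matrices of the three skewed coordinate systems and their
    mirrored versions (flipped about the Y axis). Columns are the X- and
    Y-axis directions; the Y axis is aligned with the feature orientation."""
    bases = []
    for a in np.deg2rad(SKEW_ANGLES_DEG):
        B = np.array([[np.sin(a), 0.0],
                      [np.cos(a), 1.0]])             # X axis at angle a from the Y axis
        bases.append((B, np.diag([-1.0, 1.0]) @ B))  # plus the mirrored version
    return bases

def pa_sift(patch_sampler, sift_descriptor):
    """PA-SIFT descriptor set f(p) = {f_i(p), m_i(p)}, i = 1..3 (Eq. (1)).
    patch_sampler(B) resamples the local patch under basis B so the skewed
    axes become orthogonal; sift_descriptor returns a 128-D vector.
    Both are placeholders for a real SIFT implementation."""
    return [(sift_descriptor(patch_sampler(B)), sift_descriptor(patch_sampler(M)))
            for B, M in skew_bases()]
```

Note that the 90-degree system is the ordinary orthogonal frame, so the original SIFT descriptor is a member of every PA-SIFT set.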
The scale difference between two feature points p and q is computed from the scales given by the SIFT detector as

    S(p, q) = max(s(p), s(q)) / min(s(p), s(q)),    (3)

where s(p) and s(q) are the scales of p and q, respectively. The difference between two feature points p and q is defined by

    D(p, q) = F(f(p), f(q)) S(p, q).    (4)

If D(p, q) is less than a predefined threshold T_D, the pair of features p and q is detected as a symmetric pair. Figure 4 shows an example result: the detected feature points are shown in figure 4(a), and the detected symmetric pairs in figure 4(b).

5 Detecting multiple symmetries in an image

After the symmetric pairs have been detected, we estimate a symmetry axis candidate from every two symmetric pairs by assuming that they lie on the same symmetrical planar object. For each candidate, we estimate the average fitness of all symmetric pairs to it. A symmetrical object is detected by finding the symmetry axis candidate with the highest evaluation, and the symmetric pairs on this object are obtained by selecting those with a high fitness value to the detected symmetry axis. These symmetric pairs are then used to detect the multiple symmetry axes of the detected object. In order to detect other symmetrical objects in the image, we discard all pairs belonging to the detected object and apply the single-object detection procedure to the remaining symmetric pairs, repeatedly, until no more symmetry axis candidates with high fitness can be found.

5.1 Detecting symmetry axis candidates from symmetric pairs

After detecting symmetric pairs, we estimate a symmetry axis candidate from each two symmetric pairs. Fig. 2 shows two symmetric pairs {p_i, q_i} and {p_j, q_j} on the same symmetrical planar object, thus sharing the same symmetry axis. This symmetry axis can be determined by estimating the projected midpoints of the 3D points of the two pairs.
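A minimal sketch of the pair test in Eqs. (2)-(4), assuming each feature carries a PA-SIFT set (here a list of (f_i, m_i) vector pairs) and a SIFT scale; the data layout is an assumption of this sketch, not something fixed by the paper:

```python
import numpy as np

def feature_difference(fp, fq):
    """F(f(p), f(q)), Eq. (2): minimum distance over all pairings of a
    descriptor with a mirrored descriptor, i, j in {1, 2, 3}."""
    return min(min(np.linalg.norm(fi - mj), np.linalg.norm(mi - fj))
               for fi, mi in fp for fj, mj in fq)

def scale_difference(sp, sq):
    """S(p, q), Eq. (3): ratio of the larger SIFT scale to the smaller."""
    return max(sp, sq) / min(sp, sq)

def is_symmetric_pair(fp, sp, fq, sq, T_D):
    """Eq. (4): accept the pair when D(p, q) = F * S falls below T_D."""
    return feature_difference(fp, fq) * scale_difference(sp, sq) < T_D
```

Because F pairs each descriptor only with mirrored descriptors, a feature never matches an unreflected copy of itself, which is what makes the test selective for reflective symmetry.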
Since the lines connecting symmetric pairs are parallel in 3D space, their vanishing point s_ij is the intersection of those lines in the image. The projected midpoints of the two symmetric pairs can be computed from this vanishing point using the cross-ratio of four points on a line in perspective images. Let m_i be the projected midpoint of {p_i, q_i}, and let M_i, P_i and Q_i be the 3D points of m_i, p_i and q_i, respectively, with S_ij denoting the 3D point (at infinity) corresponding to s_ij. From the invariance of the cross-ratio of four points on a line under perspective projection, we have

    ( |q_i p_i| |s_ij m_i| ) / ( |q_i m_i| |s_ij p_i| ) = ( |Q_i P_i| |S_ij M_i| ) / ( |Q_i M_i| |S_ij P_i| ),    (5)
Fig. 2. A symmetry axis candidate is estimated from two symmetric pairs.

Since the points s_ij, q_i, m_i and p_i are collinear in the image, M_i is the midpoint of P_i and Q_i, and S_ij lies at infinity, we also have

    |s_ij m_i| = |q_i m_i| + |s_ij q_i|,  |Q_i P_i| = 2 |Q_i M_i|,  |S_ij M_i| = |S_ij P_i|.    (6)

Solving eq. (5) and eq. (6), we obtain

    |q_i m_i| = ( |q_i p_i| |s_ij q_i| ) / ( 2 |s_ij p_i| - |q_i p_i| ).    (7)

The projected midpoint m_i can then be obtained as

    m_i = q_i - ( |s_ij q_i| / ( 2 |s_ij p_i| - |q_i p_i| ) ) (q_i - p_i).    (8)

Similarly, we also have

    m_j = q_j - ( |s_ij q_j| / ( 2 |s_ij p_j| - |q_j p_j| ) ) (q_j - p_j).    (9)

The line l_ij connecting m_i and m_j describes the symmetry axis. We use this line, together with the vanishing point s_ij, which indicates the 3D orientation of the lines connecting the symmetric pairs, to describe the symmetry axis:

    M_ij = { l_ij, s_ij }.    (10)

5.2 Detecting the most relevant symmetry axis and the associated symmetric pairs

We estimate a set of symmetry axis candidates and their parameters, as described in sub-section 5.1, from all combinations of any two symmetric pairs. For each
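Numerically, the construction of Eqs. (5)-(10) can be sketched as follows. Points are 2-D arrays; the homogeneous cross-product formulation for line intersection is an implementation choice of this sketch, not from the paper:

```python
import numpy as np

def hom(x):
    """Homogeneous coordinates of an image point."""
    return np.array([x[0], x[1], 1.0])

def vanishing_point(p1, q1, p2, q2):
    """s_ij: image intersection of the two lines connecting the pairs."""
    s = np.cross(np.cross(hom(p1), hom(q1)), np.cross(hom(p2), hom(q2)))
    return s[:2] / s[2]          # assumes the lines are not parallel in the image

def projected_midpoint(p, q, s):
    """Eq. (8): m = q - |s q| / (2 |s p| - |q p|) * (q - p)."""
    sq, sp, qp = (np.linalg.norm(a - b) for a, b in ((s, q), (s, p), (q, p)))
    return q - sq / (2.0 * sp - qp) * (q - p)

def axis_candidate(p1, q1, p2, q2):
    """M_ij = {l_ij, s_ij}: the homogeneous line through the two projected
    midpoints, plus the vanishing point (Eq. (10))."""
    s = vanishing_point(p1, q1, p2, q2)
    m1 = projected_midpoint(p1, q1, s)
    m2 = projected_midpoint(p2, q2, s)
    return np.cross(hom(m1), hom(m2)), s
```

As a sanity check, when the vanishing point moves far away (the near-affine case), Eq. (8) degenerates to the ordinary Euclidean midpoint.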
symmetry axis candidate, we estimate, for every symmetric pair, a degree indicating how well it fits the candidate, by considering the orientation of the line connecting the pair and the position of its projected midpoint.

Fig. 3. Estimating the fitness of {p, q} to the symmetry axis l_ij.

As shown in Fig. 3(a), {p, q} is a symmetric pair, l_ij is a symmetry axis candidate, and s_ij is the vanishing point of the symmetric pairs used for estimating l_ij. If {p, q} is a symmetric pair of l_ij, then the line connecting them should pass through s_ij. We compute two angles theta_p and theta_q:

    theta_p = angle(s_ij, p, q),  theta_q = pi - angle(s_ij, q, p).    (11)

We define an angle theta_e that describes the difference between the 3D orientation of the line through {p, q} and the vanishing point s_ij as

    theta_e = max(theta_p, theta_q).    (12)

As shown in figure 3(b), M_p is the intersection of the line through {s_ij, p} with l_ij, and M_q is the intersection of the line through {s_ij, q} with l_ij. If p and q form a symmetric pair of l_ij, then M_p and M_q should coincide at the projected midpoint of p and q. Since either feature point of a symmetric pair can be computed from the other point, the projected midpoint and the vanishing point using the cross-ratio, the correct positions of p and q can be computed as p' and q' from {s_ij, q, M_q} and {s_ij, p, M_p}, respectively. We compute the two distances

    d_p = |p - p'|,  d_q = |q - q'|.    (13)

If p and q form a symmetric pair of l_ij, then p' and q' should coincide with p and q, respectively, and both d_p and d_q should be 0. We define a normalized distance d_e that describes how well the symmetric pair fits l_ij, considering the position of its projected midpoint, as

    d_e = max(d_p, d_q) / d_pq.    (14)
Here, d_pq = |p - q| is used for normalization. We use the Mahalanobis distance to describe how well the symmetric pair {p, q} fits the symmetry axis described by M_ij:

    v(M_ij, p, q) = exp( -(1/2) x^T Sigma^{-1} x ),    (15)

where x = [theta_e d_e]^T. Assuming that the correlation between theta_e and d_e can be neglected, Sigma can be expressed as a diagonal matrix:

    Sigma = [ sigma_theta 0 ; 0 sigma_d ],    (16)

where sigma_theta and sigma_d are the standard deviations of theta_e and d_e, respectively. In order to estimate how well a symmetry axis candidate fits the symmetric pairs, we compute the average fitness over all symmetric pairs:

    V(M_ij) = (1 / n_sp) sum_{i=1}^{n_sp} v(M_ij, p_i, q_i),    (17)

where n_sp is the number of detected symmetric pairs. We compute V for each symmetry axis candidate M_ij and select the one with the greatest value as the most relevant symmetry axis of the symmetric planar object:

    M_object = argmax_{M_ij in W_m} V(M_ij),    (18)

where W_m is the set of all symmetry axis candidates. We then compute the fitness of each symmetric pair to M_object with eq. (15), and select a pair as belonging to the symmetric planar object if v(M_object, p, q) > T_sp, where T_sp is a threshold. Fig. 4 shows an example result: Fig. 4(b) shows the detected symmetric pairs, and Fig. 4(c) shows the detected symmetry axis and its symmetric pairs.

(a) Detected feature points (b) Symmetric pairs (c) Symmetry axis and its symmetric pairs
Fig. 4. Detecting the most relevant symmetry axis in an image.
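Putting Eqs. (11)-(16) together, the fitness of a pair to an axis candidate can be sketched as below. The axis is represented as a homogeneous line (an assumption of this sketch); `reconstruct` inverts Eq. (7) for whichever endpoint is given, assuming the ordering s, q, m, p along the line as in the derivation above, and Sigma is used exactly as written in Eq. (16):

```python
import numpy as np

def hom(x):
    return np.array([x[0], x[1], 1.0])

def meet(l1, l2):
    """Intersection of two homogeneous lines."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

def angle_at(vertex, a, b):
    """Angle a-vertex-b."""
    u, v = a - vertex, b - vertex
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def reconstruct(s, a, m):
    """Other endpoint of the pair {a, ?} whose projected midpoint is m,
    for vanishing point s (inverting Eq. (7))."""
    A, d = np.linalg.norm(a - s), np.linalg.norm(m - a)
    u = (a - s) / A
    if A < np.linalg.norm(m - s):       # a is the endpoint nearer s
        x = 2.0 * A * d / (A - d)       # |a ?|; the other endpoint lies beyond m
        return s + (A + x) * u
    x = 2.0 * A * d / (A + d)           # a is the farther endpoint; ? lies towards s
    return s + (A - x) * u

def fitness(p, q, s, axis, sigma_theta, sigma_d):
    """v(M_ij, p, q) of Eq. (15) with the diagonal Sigma of Eq. (16)."""
    theta_e = max(angle_at(p, s, q), np.pi - angle_at(q, s, p))   # Eqs. (11)-(12)
    Mp = meet(np.cross(hom(s), hom(p)), axis)
    Mq = meet(np.cross(hom(s), hom(q)), axis)
    d_p = np.linalg.norm(p - reconstruct(s, q, Mq))               # Eq. (13)
    d_q = np.linalg.norm(q - reconstruct(s, p, Mp))
    d_e = max(d_p, d_q) / np.linalg.norm(p - q)                   # Eq. (14)
    return float(np.exp(-0.5 * (theta_e**2 / sigma_theta + d_e**2 / sigma_d)))
```

For a pair that is exactly consistent with the axis, both the angle term and the distance term vanish and the fitness approaches 1.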
6 Detecting multiple symmetries

In this section we extend the approach described in the previous section to find multiple symmetries in the same image. In general, two cases exist:
- There are two or more symmetrical planar objects in the image.
- An object has more than one symmetry axis.

In order to detect multiple symmetrical objects in an image, one could simply repeat the process of finding symmetry axes until the average fitness of the found symmetry axis falls below a certain threshold. The main drawback of this approach is that the evaluation of the next symmetrical object is affected by symmetric pairs already used by previous symmetrical objects, which can decrease the effectiveness of the detection. To minimize this error, we discard all pairs belonging to previously detected symmetrical objects and do not take them into consideration when evaluating subsequent objects. For each detected symmetrical object, we construct a rectangular bounding box, aligned with the symmetry axis, that just contains all the symmetric pairs of the object. This bounding box approximates the area of the detected object. All symmetric pairs having at least one feature point inside this area are discarded. We then re-detect the most relevant symmetry axis from the remaining symmetric pairs, and so on. The process is repeated until the estimated symmetry axis is supported by fewer than 2 pairs (this value was determined experimentally). After all symmetrical objects have been detected, we find all symmetry axes of each object using the symmetric pairs belonging to it. We first detect the most relevant symmetry axis, i.e. the one with the highest fitness evaluation, and determine the pairs that belong to it.
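The greedy elimination loop can be sketched as follows. For simplicity this sketch uses an image-axis-aligned bounding box rather than the paper's box aligned to the symmetry axis, and `axis_from` and `fitness` are stand-ins for Eqs. (5)-(17):

```python
import numpy as np
from itertools import combinations

MIN_PAIRS = 2  # stop when a detected axis is supported by fewer pairs

def bounding_box(support):
    """Axis-aligned box just containing every point of the supporting pairs."""
    pts = np.array([pt for pair in support for pt in pair])
    return pts.min(axis=0), pts.max(axis=0)

def inside(pt, box):
    lo, hi = box
    return bool(np.all(pt >= lo) and np.all(pt <= hi))

def detect_objects(pairs, axis_from, fitness, T_sp):
    """Greedy loop of Sect. 6: repeatedly pick the axis candidate with the
    highest average fitness, collect its supporting pairs, and discard every
    pair with a point inside the detected object's bounding box."""
    objects = []
    pairs = list(pairs)
    while True:
        candidates = [axis_from(a, b) for a, b in combinations(pairs, 2)]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda M: np.mean([fitness(M, p, q) for p, q in pairs]))
        support = [(p, q) for p, q in pairs if fitness(best, p, q) > T_sp]
        if len(support) < MIN_PAIRS:
            break
        objects.append((best, support))
        box = bounding_box(support)
        pairs = [(p, q) for p, q in pairs if not (inside(p, box) or inside(q, box))]
    return objects
```

A second pass over each object's `support`, keeping every axis whose average fitness exceeds 80% of the first maximum, would then recover the object's remaining symmetry axes as described above.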
We then determine the symmetry axis with the second-highest evaluation, always using all pairs belonging to the object, and so on. We repeat the process until the evaluation falls below a percentage of the maximum evaluation (the first maximum); in this paper we use 80%.

7 Experimental results

The effectiveness of the proposed method was confirmed through experiments using both simulated and real images. In the experiments, we let the number of feature points detected by SIFT be about 500. The size of the local image patch for computing PA-SIFTs was determined by the scale given by SIFT. The standard deviations of the angle error, sigma_theta, and of the distance error, sigma_d, were set to 3.5 degrees and 0.15 pixels respectively, according to the results of some preliminary experiments. The threshold T_sp was set to

7.1 Comparative experiments on detection accuracy

In this experiment, we compared the performance of three feature descriptors for detecting symmetric pairs: (a) the original SIFT, (b) mirrored SIFT, and (c) PA-SIFTs. We detected the symmetric pairs and used them to detect the symmetry axes with each of (a), (b) and (c), and then compared the accuracy of the symmetry axes detected with (a), (b)
and (c).

(a) Generating simulation images (b) Accuracy evaluation
Fig. 5. The experiment for evaluating detection accuracy.

The skewed images were generated by rotating the vertical axis of the original image by theta_t. We let theta_t be 50, 60, 70, 80, 90, 100, 110, 120 and 130 degrees, generating nine simulation images, which were used as input images. The detection accuracy was evaluated using the inclination error theta_e between the true symmetry axis and the detected symmetry axis, and the intercept error d_e on the original vertical axis of the image (see Fig. 5(b)):

    theta_e = |theta_t - theta_d|,  d_e = |d|.    (19)

Table 1. Detection accuracy (d_e: pixels, theta_e: degrees); for each of the descriptors (a), (b) and (c), the mean, maximum, minimum and standard deviation of d_e and theta_e are reported.

The experimental results on detection accuracy using feature descriptors (a), (b) and (c) are summarized in Table 1. The results show that our PA-SIFTs give significantly smaller detection errors than the original SIFT and the mirrored SIFT. This indicates that the detection of symmetrical planar objects using PA-SIFTs is robust and sufficiently accurate for all simulation images. Figure 6 shows some experimental results of symmetrical planar object detection using our PA-SIFTs; in this figure, the true symmetry axis overlaps the detected symmetry axis. We confirmed that the symmetric pairs and the symmetry axis were detected successfully in each simulation image.
Fig. 6. Example results of detecting symmetry using PA-SIFTs (theta_t = 50, 60, 70, 80, 90, 100, 110, 120, 130 degrees).

7.2 Experiments using real images

Figure 7 shows some results of symmetry detection in real images. We confirmed that symmetrical planar objects could be detected successfully with our method even when the images showed significant deformation due to perspective projection.

7.3 Experiments using images from common databases

We also tested our method with images selected from common databases: the Caltech-256 Object Category Dataset [16] and the MSRC object class recognition databases A and B1 [17]. Some results are shown in figure 8, where the symmetry axes and symmetric pairs were detected successfully.

8 Conclusion

In this paper, we have proposed an extended SIFT feature, PA-SIFT, that can be used to detect symmetric pairs efficiently in perspective images. We have also proposed a method for detecting multiple bilateral symmetries of planar objects in perspective images, which can detect multiple symmetrical objects and all the symmetry axes of each symmetrical object.
Fig. 7. Example results of symmetry detection.

Through comprehensive experiments using not only simulation images but also real images and images from common databases, we have confirmed that our method can detect the symmetry axes and symmetric pairs of planar objects robustly and accurately in various input images.

References

1. Park, M., Lee, S., Chen, P.C., et al.: Performance evaluation of state-of-the-art discrete symmetry detection algorithms. Proc. of CVPR (2008)
2. Gupta, R., Mittal, A.: Illumination and affine-invariant point matching using an ordinal approach. Proc. of ICCV (2007)
3. Yip, R., Tam, P.: Application of elliptic Fourier descriptors to symmetry detection under parallel projection. TPAMI 16 (1994)
4. Loy, G., Eklundh, J.: Detecting symmetry and symmetric constellations of features. Proc. of ECCV (2006)
5. Cornelius, H., Loy, G.: Detecting bilateral symmetry in perspective. Proc. of POCV (2006)
6. Marola, G.: A technique for finding the symmetry axes of implicit polynomial curves under perspective projection. TPAMI 27 (2005)
7. Ha, V., Moura, J.: Affine-permutation invariance of 2-D shapes. Trans. on Image Processing 14 (2005)
MSRC object category dataset B1 (Category: Face); Caltech-256 (Category: Tower); Caltech-256 (Category: Umbrella)
Fig. 8. Experimental results using images from common databases.

8. Podolak, J., et al.: A planar-reflective symmetry transform for 3D shapes. Proc. of ACM SIGGRAPH (2006)
9. Zabrodsky, H., Peleg, S., Avnir, D.: Symmetry as a continuous feature. TPAMI 17 (1995)
10. Reisfeld, D., Wolfson, H., Yeshurun, Y.: Context-free attentional operators: The generalized symmetry transform. IJCV 14 (1995)
11. Huang, K., Hong, W., Yang, A.Y., Ma, Y.: On symmetry and multiple-view geometry: structure, pose, and calibration from a single image. IJCV 60 (2004)
12. Huynh, D.Q.: Affine reconstruction from monocular vision in the presence of a symmetry plane. Proc. of ICCV 1 (1999)
13. Bigun, J.: Pattern recognition in images by symmetries and coordinate transformations. Computer Vision and Image Understanding 3 (1997)
14. Li, W., Kleeman, L.: Fast stereo triangulation using symmetry. Australasian Conference on Robotics and Automation (2006)
15. Liu, Y., Hays, J., Xu, Y., Shum, H.: Digital papercutting. SIGGRAPH Technical Sketch, ACM (2005)
16. Griffin, G., Holub, A., Perona, P.: Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology (2007)
17. Winn, J., Criminisi, A., Minka, T.: Object categorization by learned universal visual dictionary. Proc. of ICCV 2 (2005)
More informationPart-based and local feature models for generic object recognition
Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationLecture 9 Fitting and Matching
Lecture 9 Fitting and Matching Problem formulation Least square methods RANSAC Hough transforms Multi- model fitting Fitting helps matching! Reading: [HZ] Chapter: 4 Estimation 2D projective transformation
More information3D shape from the structure of pencils of planes and geometric constraints
3D shape from the structure of pencils of planes and geometric constraints Paper ID: 691 Abstract. Active stereo systems using structured light has been used as practical solutions for 3D measurements.
More informationCS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know
More informationCS 4495 Computer Vision A. Bobick. CS 4495 Computer Vision. Features 2 SIFT descriptor. Aaron Bobick School of Interactive Computing
CS 4495 Computer Vision Features 2 SIFT descriptor Aaron Bobick School of Interactive Computing Administrivia PS 3: Out due Oct 6 th. Features recap: Goal is to find corresponding locations in two images.
More informationShape Descriptor using Polar Plot for Shape Recognition.
Shape Descriptor using Polar Plot for Shape Recognition. Brijesh Pillai ECE Graduate Student, Clemson University bpillai@clemson.edu Abstract : This paper presents my work on computing shape models that
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationAdaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision
Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China
More informationTranslation Symmetry Detection: A Repetitive Pattern Analysis Approach
2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Translation Symmetry Detection: A Repetitive Pattern Analysis Approach Yunliang Cai and George Baciu GAMA Lab, Department of Computing
More informationLarge-Scale 3D Point Cloud Processing Tutorial 2013
Large-Scale 3D Point Cloud Processing Tutorial 2013 Features The image depicts how our robot Irma3D sees itself in a mirror. The laser looking into itself creates distortions as well as changes in Prof.
More informationInstance-level recognition II.
Reconnaissance d objets et vision artificielle 2010 Instance-level recognition II. Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique, Ecole Normale
More informationCamera Geometry II. COS 429 Princeton University
Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence
More informationScale Invariant Feature Transform
Scale Invariant Feature Transform Why do we care about matching features? Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Image
More informationOutline 7/2/201011/6/
Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern
More informationFeature Detectors and Descriptors: Corners, Lines, etc.
Feature Detectors and Descriptors: Corners, Lines, etc. Edges vs. Corners Edges = maxima in intensity gradient Edges vs. Corners Corners = lots of variation in direction of gradient in a small neighborhood
More informationJoint Vanishing Point Extraction and Tracking. 9. June 2015 CVPR 2015 Till Kroeger, Dengxin Dai, Luc Van Gool, Computer Vision ETH Zürich
Joint Vanishing Point Extraction and Tracking 9. June 2015 CVPR 2015 Till Kroeger, Dengxin Dai, Luc Van Gool, Computer Vision Lab @ ETH Zürich Definition: Vanishing Point = Intersection of 2D line segments,
More informationBuilding a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882
Matching features Building a Panorama Computational Photography, 6.88 Prof. Bill Freeman April 11, 006 Image and shape descriptors: Harris corner detectors and SIFT features. Suggested readings: Mikolajczyk
More informationCEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt.
CEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt. Section 10 - Detectors part II Descriptors Mani Golparvar-Fard Department of Civil and Environmental Engineering 3129D, Newmark Civil Engineering
More informationA Symmetry Operator and Its Application to the RoboCup
A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,
More informationShape Matching. Brandon Smith and Shengnan Wang Computer Vision CS766 Fall 2007
Shape Matching Brandon Smith and Shengnan Wang Computer Vision CS766 Fall 2007 Outline Introduction and Background Uses of shape matching Kinds of shape matching Support Vector Machine (SVM) Matching with
More informationDetecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds
9 1th International Conference on Document Analysis and Recognition Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds Weihan Sun, Koichi Kise Graduate School
More informationDigital Image Processing
Digital Image Processing Part 9: Representation and Description AASS Learning Systems Lab, Dep. Teknik Room T1209 (Fr, 11-12 o'clock) achim.lilienthal@oru.se Course Book Chapter 11 2011-05-17 Contents
More informationImage correspondences and structure from motion
Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.
More informationComputer Vision I - Appearance-based Matching and Projective Geometry
Computer Vision I - Appearance-based Matching and Projective Geometry Carsten Rother 05/11/2015 Computer Vision I: Image Formation Process Roadmap for next four lectures Computer Vision I: Image Formation
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationFeature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1
Feature Detection Raul Queiroz Feitosa 3/30/2017 Feature Detection 1 Objetive This chapter discusses the correspondence problem and presents approaches to solve it. 3/30/2017 Feature Detection 2 Outline
More informationResearch on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature
ITM Web of Conferences, 0500 (07) DOI: 0.05/ itmconf/070500 IST07 Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature Hui YUAN,a, Ying-Guang HAO and Jun-Min LIU Dalian
More informationExpanding gait identification methods from straight to curved trajectories
Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationMotion Tracking and Event Understanding in Video Sequences
Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!
More informationComputer Vision I - Algorithms and Applications: Multi-View 3D reconstruction
Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View
More informationA Factorization Method for Structure from Planar Motion
A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College
More informationProf. Feng Liu. Spring /26/2017
Prof. Feng Liu Spring 2017 http://www.cs.pdx.edu/~fliu/courses/cs510/ 04/26/2017 Last Time Re-lighting HDR 2 Today Panorama Overview Feature detection Mid-term project presentation Not real mid-term 6
More informationAPPLICATION OF RADON TRANSFORM IN CT IMAGE MATCHING Yufang Cai, Kuan Shen, Jue Wang ICT Research Center of Chongqing University, Chongqing, P.R.
APPLICATION OF RADON TRANSFORM IN CT IMAGE MATCHING Yufang Cai, Kuan Shen, Jue Wang ICT Research Center of Chongqing University, Chongqing, P.R.China Abstract: When Industrial Computerized Tomography (CT)
More informationFitting: Voting and the Hough Transform April 23 rd, Yong Jae Lee UC Davis
Fitting: Voting and the Hough Transform April 23 rd, 2015 Yong Jae Lee UC Davis Last time: Grouping Bottom-up segmentation via clustering To find mid-level regions, tokens General choices -- features,
More informationAbsolute Scale Structure from Motion Using a Refractive Plate
Absolute Scale Structure from Motion Using a Refractive Plate Akira Shibata, Hiromitsu Fujii, Atsushi Yamashita and Hajime Asama Abstract Three-dimensional (3D) measurement methods are becoming more and
More informationVisual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech
Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The
More informationKey properties of local features
Key properties of local features Locality, robust against occlusions Must be highly distinctive, a good feature should allow for correct object identification with low probability of mismatch Easy to etract
More informationPatch-based Object Recognition. Basic Idea
Patch-based Object Recognition 1! Basic Idea Determine interest points in image Determine local image properties around interest points Use local image properties for object classification Example: Interest
More informationAutomatic Feature Extraction of Pose-measuring System Based on Geometric Invariants
Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Yan Lin 1,2 Bin Kong 2 Fei Zheng 2 1 Center for Biomimetic Sensing and Control Research, Institute of Intelligent Machines,
More informationCOMPUTER AND ROBOT VISION
VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California
More informationProperties of Quadratic functions
Name Today s Learning Goals: #1 How do we determine the axis of symmetry and vertex of a quadratic function? Properties of Quadratic functions Date 5-1 Properties of a Quadratic Function A quadratic equation
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera
More informationDetecting mirror-symmetry of a volumetric shape from its single 2D image
Detecting mirror-symmetry of a volumetric shape from its single D image Tadamasa Sawada Zygmunt Pizlo Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA tsawada@psych.purdue.edu
More informationHarder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford
Image matching Harder case by Diva Sian by Diva Sian by scgbt by swashford Even harder case Harder still? How the Afghan Girl was Identified by Her Iris Patterns Read the story NASA Mars Rover images Answer
More informationLocal features: detection and description May 12 th, 2015
Local features: detection and description May 12 th, 2015 Yong Jae Lee UC Davis Announcements PS1 grades up on SmartSite PS1 stats: Mean: 83.26 Standard Dev: 28.51 PS2 deadline extended to Saturday, 11:59
More informationLecture 16: Object recognition: Part-based generative models
Lecture 16: Object recognition: Part-based generative models Professor Stanford Vision Lab 1 What we will learn today? Introduction Constellation model Weakly supervised training One-shot learning (Problem
More informationComputer Vision for HCI. Topics of This Lecture
Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi
More informationPlanar Symmetry Detection by Random Sampling and Voting Process
Planar Symmetry Detection by Random Sampling and Voting Process Atsushi Imiya, Tomoki Ueno, and Iris Fermin Dept. of IIS, Chiba University, 1-33, Yayo-cho, Inage-ku, Chiba, 263-8522, Japan imiya@ics.tj.chiba-u.ac.jp
More informationCHAPTER 3. Single-view Geometry. 1. Consequences of Projection
CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.
More informationDetection of Mirror Symmetric Image Patches
2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Detection of Mirror Symmetric Image Patches Viorica Pătrăucean École Polytechnique Palaiseau, France Military Technical Academy
More informationClassifying Images with Visual/Textual Cues. By Steven Kappes and Yan Cao
Classifying Images with Visual/Textual Cues By Steven Kappes and Yan Cao Motivation Image search Building large sets of classified images Robotics Background Object recognition is unsolved Deformable shaped
More informationUniversity of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract
Mirror Symmetry 2-View Stereo Geometry Alexandre R.J. François +, Gérard G. Medioni + and Roman Waupotitsch * + Institute for Robotics and Intelligent Systems * Geometrix Inc. University of Southern California,
More informationImage matching. Announcements. Harder case. Even harder case. Project 1 Out today Help session at the end of class. by Diva Sian.
Announcements Project 1 Out today Help session at the end of class Image matching by Diva Sian by swashford Harder case Even harder case How the Afghan Girl was Identified by Her Iris Patterns Read the
More informationLocal Feature Detectors
Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,
More informationAnnouncements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test
Announcements (Part 3) CSE 152 Lecture 16 Homework 3 is due today, 11:59 PM Homework 4 will be assigned today Due Sat, Jun 4, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 09 130219 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Feature Descriptors Feature Matching Feature
More informationRequirements for region detection
Region detectors Requirements for region detection For region detection invariance transformations that should be considered are illumination changes, translation, rotation, scale and full affine transform
More information