Detecting Multiple Symmetries with Extended SIFT


Anonymous ACCV submission, Paper ID 388

Abstract. This paper describes an effective method for detecting multiple bilateral symmetries of planar objects under perspective projection. The method can detect multiple symmetrical objects as well as the multiple symmetry axes of each object. We propose an extended SIFT feature, called pseudo-affine invariant SIFT, for detecting symmetric feature pairs whose appearances differ in the image due to perspective projection. Candidates for symmetry axes are obtained by finding the two projected mid-points of every two symmetric pairs, based on the cross-ratio of four points on a line. The candidate to which the greatest number of symmetric pairs fits is detected as the most relevant symmetry axis of a symmetrical object. The other symmetry axes of the object are then detected from the symmetric pairs belonging to that axis. After eliminating the pairs of a detected symmetrical object, the procedure is applied repeatedly to the remaining symmetric pairs until all symmetrical objects are detected.

1 Introduction

Bilateral reflection symmetry exists in many artificial objects, animals and plants. By detecting the multiple symmetries of objects, we obtain not only global geometric properties such as symmetry axes, but also the sets of pairs of image features at symmetric positions on each symmetrical object. This groups image features and establishes pair-wise correspondences between features scattered in an image, and thus provides rich information about the global structure of objects. This paper proposes a powerful method for detecting multiple bilateral symmetries of planar objects under perspective projection, and for grouping the associated symmetric pairs of features from a single view.
We extend the SIFT feature descriptor so that it can cope with affine transformations. This extension greatly increases the matching capability of the descriptor: symmetric pairs can be detected much more reliably than with the original SIFT while keeping a low false-positive rate. Candidates for symmetry axes are estimated by finding the two projected mid-points of every two symmetric pairs, based on the cross-ratio of four points on a line. The candidate to which the greatest number of symmetric pairs fits is detected as the symmetry axis of a symmetrical object. All symmetry axes of that object are then detected by estimating axis candidates from the symmetric pairs associated with the object and selecting every candidate to which a large number of pairs fits. This procedure is applied repeatedly to the remaining symmetric pairs, after eliminating those associated with each detected object, until all symmetrical objects are found. The method can cope with the

perspective distortions of not only the geometric arrangement but also the appearance of the symmetric pairs of features. It can detect multiple symmetrical objects as well as multiple symmetries within the same object, and it does not require that the geometric arrangement, the appearance, or the feature descriptors of the symmetric pairs remain symmetrical in the input images.

2 Background

Symmetry detection has been studied in computer vision for several decades (e.g. [1][2][3][4][5][6][7][8][9][10]) and used for many applications, including reconstruction [11][12], pattern recognition [13] and stereo vision [14]. Liu et al. [15] used edge-based features and exhaustive search to identify all single reflection symmetries. Since their algorithm was designed for analyzing artistic paper-cutting patterns, the difficulties caused by perspective or affine distortion, image noise or complex backgrounds were not considered. Loy et al. [4] used SIFT to detect feature points and to find pairwise matches; symmetry was detected by voting for symmetry foci with a Hough transform, and both reflection and rotation symmetry can be detected. However, when finding pairwise matches and estimating symmetry axes, their method uses the orientations and positions of image features directly, without considering the perspective or affine distortion of symmetrical objects. In contrast, our method takes such distortions into account both when finding symmetric pairs and when estimating symmetry axes. Cornelius et al. [5] used several feature detectors, including SIFT, Hessian-affine and Harris-affine, to detect feature points and find pairwise matches; symmetry was again detected by voting for symmetry foci with a Hough transform.
Their method can detect reflection symmetry under perspective projection; however, the details of the algorithm for finding pairwise matches were not given in the paper.

3 Pseudo-affine invariant SIFT features for detecting symmetric pairs

Due to perspective projection, the image patches of the symmetric parts of a symmetrical object show different appearances: they may differ in scale, orientation and skew, as well as being reflected. In order to detect symmetric pairs of feature points in images, we need a feature detector that finds distinctive points with good repeatability and yields descriptors that can be used to estimate the similarity between features even under perspective distortion. Many symmetry detection methods use the SIFT detector because of its good performance in detecting features and its information-rich descriptor. While the SIFT descriptor is scale- and rotation-invariant, it is not invariant to skew. Here we extend the SIFT descriptor so that it can be used to find pairwise matches among image patches that differ in scale, rotation and skew. We first use SIFT to detect feature points; SIFT gives the orientation and scale of each feature in addition to its position. In order to give the SIFT

feature descriptor the ability to estimate the similarity between two image features that differ by a skew, we define several skewed coordinate systems and their mirrored versions (flipped about the Y axis) for the image patch used to compute the SIFT descriptor (see figure 1). The Y axis is aligned with the orientation of the detected feature. We quantize the angle between the X and Y axes of the skewed coordinate systems; in the experiments, we let the angles be 60, 90 and 120 degrees. Each angle defines a skewed coordinate system, which is used to normalize the image patch so that the skewed axes become orthogonal. We then re-compute a SIFT descriptor from each normalized image patch. Thus, each feature point p has a set of SIFT descriptors f(p):

f(p) = { f_i(p), m_i(p) },  i = 1, 2, 3,   (1)

where f_i(p) and m_i(p) are the SIFT descriptors (128-dimensional feature vectors) computed using the i-th skewed coordinate system and its mirrored version, respectively. We call these extended SIFT features pseudo-affine SIFT features, or simply PA-SIFTs.

Fig. 1. Definition of pseudo-affine SIFT features (PA-SIFTs).

4 Detecting symmetric pairs with PA-SIFTs

In order to detect symmetric pairs of features, the difference between any two features is estimated from their feature difference and their scale difference. The feature difference between two feature points p and q is computed from their PA-SIFTs as

F(f(p), f(q)) = min_{i,j ∈ {1,2,3}} min( ||f_i(p) − m_j(q)||, ||m_i(p) − f_j(q)|| ).   (2)
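Eq. (2) compares the straight descriptors of one point against the mirrored descriptors of the other, over all skew indices. A minimal numpy sketch, assuming each descriptor set is stored as a (3, 128) array (the function name `feature_difference` is illustrative, not from the paper):

```python
import numpy as np

def feature_difference(fp, mp, fq, mq):
    """Eq.(2): the smallest Euclidean distance between a straight
    descriptor of one point and a mirrored descriptor of the other,
    over all skew indices i, j in {1, 2, 3}.
    fp, mp, fq, mq: (3, 128) arrays of straight / mirrored PA-SIFTs."""
    # Pairwise distances f_i(p) vs m_j(q) and m_i(p) vs f_j(q).
    d_fm = np.linalg.norm(fp[:, None, :] - mq[None, :, :], axis=-1)
    d_mf = np.linalg.norm(mp[:, None, :] - fq[None, :, :], axis=-1)
    return float(min(d_fm.min(), d_mf.min()))
```

A mirror-symmetric pair ideally has some straight descriptor of p matching some mirrored descriptor of q, driving F toward zero.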

The scale difference between two feature points p and q is computed from the scales given by the SIFT detector as

S(p, q) = max(s(p), s(q)) / min(s(p), s(q)),   (3)

where s(p) and s(q) are the scales of p and q, respectively. The difference between the two feature points is then defined by

D(p, q) = F(f(p), f(q)) S(p, q).   (4)

If D(p, q) is less than a predefined threshold T_D, the pair {p, q} is detected as a symmetric pair. Figure 4 shows an example: the detected feature points are shown in figure 4(a), and the detected symmetric pairs in figure 4(b).

5 Detecting multiple symmetries in an image

After the symmetric pairs have been detected, we estimate a symmetry axis candidate from every two symmetric pairs, assuming they lie on the same symmetrical planar object. For each candidate, we estimate the average fitness of all symmetric pairs to it. A symmetrical object is detected by finding the candidate with the highest evaluation, and the symmetric pairs on that object are obtained by selecting the pairs with high fitness to the detected axis. We then use these pairs to detect the multiple symmetry axes of the detected object. In order to detect further symmetrical objects, we discard all pairs belonging to the detected object and apply the single-object procedure to the remaining pairs repeatedly, until no more candidate symmetry axes with high fitness can be found.

5.1 Detecting symmetry axis candidates from symmetric pairs

After detecting symmetric pairs, we estimate an axis candidate from each combination of two symmetric pairs.
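The symmetric pairs mentioned above come from the test of eqs. (3)-(4). A minimal sketch, assuming a precomputed symmetric matrix F of eq.-(2) feature differences (`detect_symmetric_pairs` is a hypothetical helper name):

```python
import numpy as np

def detect_symmetric_pairs(F, scales, T_D):
    """Eqs.(3)-(4): S(p,q) = max(s_p,s_q)/min(s_p,s_q) penalizes pairs
    whose SIFT scales disagree; D(p,q) = F(p,q) * S(p,q); pairs with
    D below the threshold T_D are kept as symmetric-pair candidates."""
    F = np.asarray(F, float)
    s = np.asarray(scales, float)
    pairs = []
    for p in range(len(s)):
        for q in range(p + 1, len(s)):
            S = max(s[p], s[q]) / min(s[p], s[q])
            if F[p, q] * S < T_D:
                pairs.append((p, q))
    return pairs
```

Since S >= 1, a large scale disagreement can reject a pair even when the descriptor difference alone is small.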
Fig. 2 shows two symmetric pairs {p_i, q_i} and {p_j, q_j} lying on the same symmetrical planar object and thus sharing the same symmetry axis. This axis can be determined by estimating the projected mid-points of the 3D points of the two pairs. Since the lines connecting symmetric pairs are parallel in 3D space, their intersection in the image is their vanishing point s_ij. The projected mid-points of the two symmetric pairs can then be computed from this vanishing point using the cross-ratio of four points on a line, which is preserved under perspective projection. Let m_i be the projected mid-point of {p_i, q_i}, and let M_i, S_ij, P_i, Q_i be the 3D points corresponding to m_i, s_ij, p_i, q_i, respectively. From the invariance of the cross-ratio we have

( |q_i p_i| |s_ij m_i| ) / ( |q_i m_i| |s_ij p_i| ) = ( |Q_i P_i| |S_ij M_i| ) / ( |Q_i M_i| |S_ij P_i| ),   (5)
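Solving this cross-ratio relation under the mid-point and vanishing-point constraints yields a closed form for the projected mid-point. A minimal numpy sketch of that closed form (`projected_midpoint` is an illustrative name), checked against a synthetic perspective mapping:

```python
import numpy as np

def projected_midpoint(p, q, s):
    """Image position of the projected 3D mid-point of a symmetric
    pair {p, q}, given the vanishing point s of the lines joining
    symmetric pairs (p, q and s are collinear in the image)."""
    p, q, s = (np.asarray(v, float) for v in (p, q, s))
    sq = np.linalg.norm(s - q)
    sp = np.linalg.norm(s - p)
    qp = np.linalg.norm(q - p)
    # |q m| = |q p| |s q| / (2 |s p| - |q p|), taken along q -> p.
    return q - (sq / (2.0 * sp - qp)) * (q - p)

# Synthetic check: the 1D perspective map u(t) = t / (t + 3) sends the
# 3D line points t = -1, 0, 1 and the direction (t -> infinity) to
# p = -1/2, m = 0, q = 1/4, s = 1.  Embedded on the image x-axis:
m = projected_midpoint([-0.5, 0.0], [0.25, 0.0], [1.0, 0.0])
```

With s moved far away (an affine view), the formula degenerates to the Euclidean midpoint (q + p)/2, as expected.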

Fig. 2. An axis candidate of symmetry is estimated from two symmetric pairs.

Since M_i is the 3D mid-point and S_ij is the point at infinity of the 3D line, the distances satisfy

|s_ij m_i| = |q_i m_i| + |s_ij q_i|,  |Q_i P_i| = 2 |Q_i M_i|,
|S_ij M_i| = |Q_i M_i| + |S_ij Q_i|,  |S_ij P_i| = 2 |Q_i M_i| + |S_ij Q_i|.   (6)

Solving eq. (5) and eq. (6) (with |S_ij Q_i| → ∞, the right-hand side of eq. (5) equals 2), we obtain

|q_i m_i| = |q_i p_i| |s_ij q_i| / ( 2 |s_ij p_i| − |q_i p_i| ).   (7)

The projected mid-point m_i can then be obtained as

m_i = q_i − ( |s_ij q_i| / ( 2 |s_ij p_i| − |q_i p_i| ) ) (q_i − p_i).   (8)

Similarly, we also have

m_j = q_j − ( |s_ij q_j| / ( 2 |s_ij p_j| − |q_j p_j| ) ) (q_j − p_j).   (9)

The line l_ij connecting m_i and m_j is the symmetry axis. We describe the symmetry axis by this line together with the vanishing point s_ij, which indicates the 3D orientation of the lines connecting the symmetric pairs:

M_ij = { l_ij, s_ij }.   (10)

5.2 Detecting the most relevant symmetry axis and the associated symmetric pairs

We estimate the set of symmetry axis candidates and their parameters as described in sub-section 5.1, using all combinations of two symmetric pairs. For each

symmetry axis candidate, we estimate, for every symmetric pair, a degree indicating how well the pair fits the candidate, considering both the orientation of the line connecting the pair and the position of its projected mid-point.

Fig. 3. Estimating the fitness of {p, q} to the symmetry axis l_ij.

As shown in Fig. 3(a), {p, q} is a symmetric pair, l_ij is a symmetry axis candidate, and s_ij is the vanishing point of the symmetric pairs used for estimating l_ij. If {p, q} is a symmetric pair of l_ij, the line connecting them should pass through s_ij. We therefore compute two angles θ_p and θ_q:

θ_p = ∠ s_ij p q,  θ_q = π − ∠ s_ij q p.   (11)

We define the angle θ_e describing the difference between the 3D orientation of the line through {p, q} and the vanishing point s_ij as

θ_e = max(θ_p, θ_q).   (12)

As shown in figure 3(b), M_p is the intersection of the line through {s_ij, p} with l_ij, and M_q is the intersection of the line through {s_ij, q} with l_ij. If p and q form a symmetric pair of l_ij, then M_p and M_q should coincide at the projected mid-point of p and q. Since either feature point of a symmetric pair can be computed from the other point, the projected mid-point and the vanishing point via the cross-ratio, the expected positions of p and q can be computed as p′ and q′ from {s_ij, q, M_q} and {s_ij, p, M_p}, respectively. We compute the two distances

d_p = ||p − p′||,  d_q = ||q − q′||.   (13)

If p and q form a symmetric pair of l_ij, then p′ and q′ should coincide with p and q, and both d_p and d_q should be 0. We define a normalized distance d_e describing how well the symmetric pair fits l_ij in terms of the position of its projected mid-point:

d_e = max(d_p, d_q) / d_pq.   (14)
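The angle term of eqs. (11)-(12) amounts to checking that p, q and s_ij are collinear, with s_ij outside the segment pq. A minimal sketch (`angle_error` is an illustrative name):

```python
import numpy as np

def angle_error(p, q, s_ij):
    """Eqs.(11)-(12): theta_p is the angle at p between rays p->s_ij
    and p->q; theta_q is pi minus the angle at q between rays q->s_ij
    and q->p.  Both vanish when p, q, s_ij are collinear with s_ij
    beyond q; theta_e = max(theta_p, theta_q)."""
    p, q, s = (np.asarray(v, float) for v in (p, q, s_ij))

    def ang(a, b):  # angle between two direction vectors
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))

    theta_p = ang(s - p, q - p)
    theta_q = np.pi - ang(s - q, p - q)
    return max(theta_p, theta_q)
```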

Here, d_pq = ||p − q|| is used for normalization. We use a Mahalanobis distance to describe how well the symmetric pair {p, q} fits the symmetry axis described by M_ij:

v(M_ij, p, q) = exp( −(1/2) x^T Σ^{−1} x ),   (15)

where x = [θ_e, d_e]^T. Assuming that the correlation between θ_e and d_e can be neglected, Σ can be expressed as a diagonal matrix:

Σ = [ σ_θ  0 ; 0  σ_d ],   (16)

where σ_θ and σ_d are the standard deviations of θ_e and d_e, respectively. In order to estimate how well a symmetry axis candidate fits the symmetric pairs, we compute its average fitness over all symmetric pairs:

V(M_ij) = ( Σ_{i=1}^{n_sp} v(M_ij, p_i, q_i) ) / n_sp,   (17)

where n_sp is the number of detected symmetric pairs. We compute V for each symmetry axis candidate M_ij and select the one with the greatest value as the most relevant symmetry axis of the symmetric planar object:

M_object = argmax_{M_ij ∈ W_m} V(M_ij),   (18)

where W_m is the set of all symmetry axis candidates. We then compute the fitness of each symmetric pair to M_object with eq. (15) and select as the symmetric pairs of the symmetric planar object those with v(M_object, p, q) > T_sp, where T_sp is a threshold. Fig. 4 shows an example result: Fig. 4(b) shows the detected symmetric pairs, and Fig. 4(c) the detected symmetry axis and its symmetric pairs.

(a) Detected feature points (b) Symmetric pairs (c) Symmetry axis and its symmetric pairs

Fig. 4. Detecting the most relevant symmetry axis in an image.
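The scoring and selection of eqs. (15)-(18) can be sketched as follows. This is a minimal sketch with illustrative names (`pair_fitness`, `best_axis`), using the σ_θ = 3.5 and σ_d = 0.15 settings reported in Section 6, and writing Σ⁻¹ with the diagonal entries of eq. (16) as given:

```python
import numpy as np

def pair_fitness(theta_e, d_e, sigma_theta=3.5, sigma_d=0.15):
    """Eqs.(15)-(16): Gaussian fitness with a diagonal Sigma whose
    entries are sigma_theta and sigma_d, as written in eq.(16)."""
    x = np.array([theta_e, d_e])
    sigma_inv = np.diag([1.0 / sigma_theta, 1.0 / sigma_d])
    return float(np.exp(-0.5 * x @ sigma_inv @ x))

def best_axis(errors):
    """Eqs.(17)-(18): errors[a] lists (theta_e, d_e) of every symmetric
    pair against axis candidate a; return the index of the candidate
    with the highest average fitness V."""
    V = [np.mean([pair_fitness(t, d) for t, d in errs]) for errs in errors]
    return int(np.argmax(V))
```

A pair that fits the axis exactly (θ_e = d_e = 0) scores v = 1, and the candidate whose pairs have the smallest errors wins the argmax.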

5.3 Detecting multiple symmetries

In this section we extend the approach described in the previous section to find multiple symmetries in the same image. In general, two cases exist:

- There are two or more symmetrical planar objects in the image.
- An object has more than one symmetry axis.

In order to detect multiple symmetrical objects, one may simply repeat the process of finding symmetry axes until the average fitness of the found axis falls below a threshold. The main drawback of this approach is that the evaluation of the next symmetrical object is affected by symmetric pairs already used by previously detected objects, which can decrease the effectiveness of the detection. To minimize this error, we discard all pairs belonging to previously detected objects and exclude them from the evaluation of subsequent objects. For each detected symmetrical object, we construct a rectangular bounding box, aligned with the symmetry axis, that just contains all the symmetric pairs of the object. This bounding box approximates the area of the detected object, and all symmetric pairs having at least one feature point inside it are discarded. We then find the most relevant symmetry axis among the remaining pairs, and so on; the process is repeated until the estimated symmetry axis has fewer than 2 fitting pairs (a value determined experimentally). After all symmetrical objects have been detected, we find all symmetry axes within each object using the symmetric pairs belonging to it. We first detect the most relevant symmetry axis, i.e. the one with the highest fitness evaluation.
We determine the pairs that belong to this symmetry axis. We then determine the axis with the second-highest evaluation, always using all pairs belonging to the object, and so on, repeating the process until the evaluation falls below a percentage of the maximum (first) evaluation; in this paper we use 80%.

6 Experimental results

The effectiveness of the proposed method was confirmed through experiments on simulated and real images. In the experiments, the number of feature points detected by SIFT was set to about 500. The size of the local image patch for computing PA-SIFTs was determined by the scale given by SIFT. The standard deviations of the angle error, σ_θ, and of the distance error, σ_d, were set to 3.5 degrees and 0.15 pixels respectively, based on preliminary experiments. The threshold T_sp was set to 0.5.

6.1 Comparative experiments on detection accuracy

In this experiment, we compared the performance of three feature descriptors for detecting symmetric pairs: (a) the original SIFT, (b) mirrored SIFT and (c) PA-SIFTs. We detected the symmetric pairs with each descriptor and used them to detect the symmetry axes. We then compared the accuracy of the symmetry axes detected with (a), (b)

and (c). The skewed images were generated by rotating the vertical axis of the original image by θ_t. We let θ_t be 50, 60, 70, 80, 90, 100, 110, 120 and 130 degrees, generating nine simulation images of 640 × 480 pixels, which were used as input. The detection accuracy was evaluated by the inclination error θ_e between the true symmetry axis and the detected symmetry axis, and by the intercept error d_e on the original vertical axis of the image (see Fig. 5(b)):

θ_e = |θ_t − θ_d|,  d_e = |d|.   (19)

(a) Generating simulation images (b) Accuracy evaluation

Fig. 5. The experiment for evaluating detection accuracy.

Table 1. Detection accuracy (d_e: pixel, θ_e: degree)

            Mean    Maximum  Minimum  Standard deviation
(a)  d_e    16.790  57.768   0.266    20.702
     θ_e    11.955  48.565   0.049    15.705
(b)  d_e     5.764  28.730   0.054     8.886
     θ_e    10.842  51.024   0.040    17.763
(c)  d_e     3.120  11.852   0.005     3.854
     θ_e     1.577   3.910   0.053     1.350

The experimental results using descriptors (a), (b) and (c) are summarized in Table 1. They show that our PA-SIFTs give significantly smaller detection errors than the original SIFT and the mirrored SIFT, indicating that the detection of symmetrical planar objects using PA-SIFTs is robust and sufficiently accurate on all simulation images. Figure 6 shows some results of symmetrical planar object detection using PA-SIFTs; in this figure, the true symmetry axis overlaps the detected one. We confirmed that the symmetric pairs and the symmetry axis were detected successfully in each simulation image.

θ_t = 50, 60, 70, 80, 90, 100, 110, 120, 130 degrees

Fig. 6. Example results of detecting symmetry using PA-SIFTs.

6.2 Experiments using real images

Figure 7 shows some results of symmetry detection in real images. We confirmed that symmetrical planar objects could be detected successfully with our method even when the images showed significant deformation due to perspective projection.

6.3 Experiments using images from common databases

We also tested our method on images selected from common databases: the Caltech-256 Object Category Dataset [16] and the MSRC object class recognition databases A and B1 [17]. Some results are shown in figure 8, where the symmetry axes and symmetric pairs were detected successfully.

7 Conclusion

In this paper, we have proposed an extended SIFT feature, PA-SIFT, that can be used to detect symmetric pairs efficiently in perspective images. We have also proposed a method for detecting multiple bilateral symmetries of planar objects in perspective images, which can detect multiple symmetrical objects and all the symmetry axes in each of them.

Fig. 7. Example results of symmetry detection.

Through comprehensive experiments using not only simulation images but also real images and images from common databases, we have confirmed that our method can detect the symmetry axes and symmetric pairs of planar objects robustly and accurately in a variety of input images.

References

1. Park, M., Lee, S., Chen, P.C., et al.: Performance evaluation of state-of-the-art discrete symmetry detection algorithms. Proc. of CVPR (2008) 1-8
2. Gupta, R., Mittal, A.: Illumination and affine-invariant point matching using an ordinal approach. Proc. of ICCV (2007) 1-8
3. Yip, R., Tam, P.: Application of elliptic Fourier descriptors to symmetry detection under parallel projection. TPAMI 16 (1994) 277-286
4. Loy, G., Eklundh, J.: Detecting symmetry and symmetric constellations of features. Proc. of ECCV 3952/2006 (2006) 508-521
5. Cornelius, H., Loy, G.: Detecting bilateral symmetry in perspective. Proc. of POCV (2006)
6. Marola, G.: A technique for finding the symmetry axes of implicit polynomial curves under perspective projection. TPAMI 27 (2005) 465-470
7. Ha, V., Moura, J.: Affine-permutation invariance of 2-D shapes. Trans. on Image Processing 14 (2005) 1687-1700

MSRC object category dataset B1 (Category: Face); Caltech-256 (Category: Tower); Caltech-256 (Category: Umbrella)

Fig. 8. Experimental results using images from common databases.

8. Podolak, J., Shilane, P., et al.: A planar-reflective symmetry transform for 3D shapes. Proc. of ACM SIGGRAPH (2006) 549-559
9. Zabrodsky, H., Peleg, S., Avnir, D.: Symmetry as a continuous feature. TPAMI 17 (1995) 1154-1166
10. Reisfeld, D., Wolfson, H., Yeshurun, Y.: Context-free attentional operators: The generalized symmetry transform. IJCV 14 (1995) 119-130
11. Huang, K., Hong, W., Yang, A.Y., Ma, Y.: On symmetry and multiple-view geometry: structure, pose, and calibration from a single image. IJCV 60 (2004) 241-265
12. Huynh, D.Q.: Affine reconstruction from monocular vision in the presence of a symmetry plane. Proc. of ICCV 1 (1999) 476-482
13. Bigun, J.: Pattern recognition in images by symmetries and coordinate transformations. Computer Vision and Image Understanding 3 (1997) 290-307
14. Li, W., Kleeman, L.: Fast stereo triangulation using symmetry. Australasian Conference on Robotics and Automation (2006) http://www.araa.asn.au/acra/acra2006/contents.html
15. Liu, Y., Hays, J., Xu, Y.Q., Shum, H.: Digital papercutting. SIGGRAPH Technical Sketch, ACM (2005)
16. Griffin, G., Holub, A., Perona, P.: Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology (2007)
17. Winn, J., Criminisi, A., Minka, T.: Object categorization by learned universal visual dictionary. Proc. of ICCV 2 (2005) 1800-1807