Extraction of 3D Transform and Scale Invariant Patches from Range Scans


Erdem Akagündüz and İlkay Ulusoy
Dept. of Electrical and Electronics Engineering, Middle East Technical University, Ankara 06531, Turkey

Abstract

An algorithm is proposed to extract transformation- and scale-invariant 3D fundamental elements from the surface structure of 3D range scan data. The surface is described by mean and Gaussian curvature values at every data point at various scales, and a scale-space search is performed in order to extract the fundamental structures and to estimate the location and scale of each one. The extracted fundamental structures can later be used as nodes in a topological graph, where the links between the nodes are the spatial and geometric relations between the fundamental elements.

1. Introduction

With the fast increase in the usage of 3D range scanners in applications such as robotics and biometrics, the need to process 3D point cloud data in order to extract representative information from the scanned scene has become very important. The points in a 3D point cloud are samples from the surface of the scanned scene and are usually uniformly distributed. Many studies have addressed industrial parts, which usually have flat surfaces [33], or volumetric primitives such as cones and cylinders [5], but when the scanned scene includes free-form shapes, arbitrary smooth surfaces need to be described. Especially when the task is face recognition from 3D scanner outputs, it is vital to extract descriptive surface information, both to segment the face from the background and to differentiate one face from another. Reliable curvature estimation is an important goal in scene analysis, as it provides viewpoint-independent cues for shape classification [12].
Concepts and techniques from differential geometry provide several measures of curvature, including the Gaussian and mean curvatures, for describing the shape of arbitrary smooth surfaces arising in range images. Combining Gaussian and mean curvature values allows local surface types to be categorized in a way that is invariant to rotations and translations [1]. Such local surface descriptions are also better suited than global shape descriptors for shape matching when occlusion exists. Although there are very successful global surface shape descriptors such as Extended Gaussian Images (EGI) [14], they have difficulty capturing subtle shape variations, especially when large parts of the shape are missing from the scene, and they must be processed further before they can be used for shape matching in occluded and cluttered scenes. There are many studies of local surface description. In [15] the spin image was used as a data-level shape descriptor, where two coordinates (alpha and beta) were computed for each vertex of the surface mesh lying within the support of the spin image. The support distance is a parameter that sets the maximum distance between the oriented point and a point contributing to the spin image; it is analogous to the window size in 2D template matching, which is directly related to the scale of the analysis. Thus, the spin image algorithm is sensitive to variations in data resolution. In [18] the spin image was extended to be scale invariant. In [1] the signs of the mean curvature (H) and Gaussian curvature (K) were used to determine a surface type for each pixel. An alternative curvature description was defined in [17], where the shape (S) and the magnitude of the curvedness (C) are decoupled. The HK and SC methods were compared in [4] in terms of their shape classification performance, their response to thresholding and their tolerance to noise. There are many other curvature estimation methods.
In [8] Darboux frames, which describe the orientation, principal curvatures and principal directions at a point on a surface, were used. In [18] curves and surfaces were estimated using B-splines. In [30] the tensor voting formalism was used to infer the sign and direction of the principal curvatures. Because second derivatives are needed, surface curvature estimates are very sensitive to quantization noise. In [9] surface fitting and numerical curvature estimates were compared in terms of their tolerance to noise. The authors concluded that numerical curvature estimation methods perform about as accurately as analytical techniques, and that it is very important to apply surface smoothing before any type of curvature estimation algorithm. However, although the selected mask (kernel or filter) size, which defines the extent of smoothing and curvature estimation, strongly affects the outcomes of these methods, they did not compare the methods with respect to different mask sizes.

In [11] a method to infer surfaces while detecting intersections between surfaces and 3D junctions was proposed. Such qualitative segmentation of the range image into a set of volumetric parts not only captures the coarse shape of the parts, but also qualitatively encodes the orientation of each part through its aspect. When the 3D scene is segmented into consistent surfaces, object recognition can be performed by matching these surfaces to a model (or database) based on their surface descriptors. There are many other methods that consider surfaces together with their boundaries and junctions [1], [29], [6]. However, in most surface segmentation studies the main problem is that the curves and junctions between surfaces are not well localized. This problem is mostly related to the size of the mask used, which is directly linked to the scale of the analysis. The scale problems mentioned above for curvature estimation and surface segmentation can be handled if a multi-scale analysis is performed on the surface structure. Most studies that consider multi-scale analysis perform scale-space processing, which is concerned with obtaining more than one set of results for a single data set by processing the data at different resolutions [20]. Traditionally, coarse-to-fine processing is performed in order to combine results across scales. This also has physiological and psychological roots, since humans retrieve coarse geometric shapes first and then add more details to refine the judgment if needed.
In [13] surface models from low to high detail in scale space were used for modeling purposes, where the aim was to model a 3D scene without losing any details while using the smallest number of polyhedral surfaces. There are many other similar methods that use multi-scale analysis for modeling purposes [35], [31]. In [33] multi-scale curvature computation was achieved by convolving local parameterizations of the surface iteratively with 2D Gaussian filters and by locating local maxima of the Gaussian and mean curvatures at each scale, which were then used as significant and robust feature points. Zero crossings of the Gaussian and mean curvatures were also used for segmenting surfaces into regions. Although significant points and their boundaries were extracted at different surface resolutions, information from different resolutions was not combined, and these features were left for further processing for robust surface matching. In [2] a solution for matching using features at different scale levels was proposed, but for CAD databases only. It is nevertheless a promising approach, in which a graph structure is constructed for the object at various scales and those graphs are combined under a root. During matching, the multi-scale graph structure of the object is searched through the database. The most problematic assumption made there is that the graph structures of the object and of the models in the database are available at every scale.

Formulating the 3D object recognition problem as graph matching is a well-established approach in the computer vision community. The nodes of the graph represent 3D features or their abstractions, and the edges represent spatial, geometric or hierarchical relations between the features; the relations can include other types of information as well. For example, segmented surfaces described by their curvature properties were used as nodes, and the relations between them as links, in attributed graphs for industrial objects in [33] and for general objects in [7]. In [5] volumetric primitives (cones, cylinders, etc.) were extracted and their connectivity relations were used in a graph structure. Graph matching in computer vision has been studied extensively, with both exact and inexact algorithms applied to object recognition, including [10], [16], [22], [24], [25], [26], [27], [28], [32] to name just a few. In this study, we also apply multi-scale analysis to 3D point cloud data, but not as a coarse-to-fine approach (the main trend in earlier studies); instead we use the scale space as a volume in which we can search for fundamental elements (significant locations) and their exact scales on the surface. Thus it is a scale invariant approach, since we locate significant surface structures together with their scale information. In doing this we were inspired by the work of Lowe [21] and of Mikolajczyk and Schmid [23], who located significant points irrespective of their scales in 2D images. In our 3D case, the fundamental elements are taken to be peaks, pits and saddles, which are more realistic structures for natural 3D objects such as human faces. For the extraction of local surface structures we use mean and Gaussian curvature values, which are invariant to rotations and translations. We do not perform a surface segmentation in order to describe a scene, as proposed by earlier studies in which border localization was a critical issue; instead we describe the scene by its fundamental elements. The configuration of these fundamental elements may later be formulated as a graph, where the nodes are the fundamental structures and the links can be directed and carry relative positional or geometric relations between the nodes.
Graph construction and graph matching are out of the scope of this paper; they may be performed using one of the successful algorithms listed in the previous paragraph and are left for future work. The outline of the paper is as follows. In Section 2 we detail the steps of the scale-space analysis, including HK map calculation, Gaussian pyramiding and fundamental element labeling. Section 3 discusses scale estimation in the UVS space, where u and v are the surface dimensions and s is the scale. Section 4 demonstrates results on natural objects. Finally, Section 5 offers some conclusions and directions for future research.

2. Surface Representation

We begin by introducing our method for finding the fundamental elements on the input surface. Then we explain the construction of the Gaussian pyramid, which is computed by Gaussian filtering and down-sampling. Finally, we introduce our scale space.

2.1. Extraction of Fundamental Surface Elements

In their original work [1], Besl and Jain calculated the mean and Gaussian curvatures on the surface and used tolerance signum functions sgn_ЄH(H(i,j)) and sgn_ЄK(K(i,j)) with preselected zero thresholds to label the fundamental elements on the surface. We use exactly the same method to detect peak, pit and saddle regions, with threshold values very close to the ones they defined (we used Є_H = 0.03 and Є_K = 0.005). In order to calculate the Gaussian and mean curvatures we fit third-order 3x3 B-spline surfaces at each point, which is the closest analytical technique to a numerical estimate, as it approximates derivatives with difference operations. In Figure 1, the HK map for a facial surface is shown, where blue regions are peaks, cyan regions are pits, red regions are saddle ridges and yellow regions are saddle valleys.

2.2. Gaussian Pyramiding

Estimating surface labels at the original resolution of the surface restricts us to finding the fundamental elements only at the lowest scale. In order to detect structures at higher scales, a scale-space representation must be introduced. For this purpose, we first generate a Gaussian pyramid of the input surface using the Reduce function from the original work of Burt and Adelson [3], where a Gaussian pyramid is constructed by smoothing and down-sampling the data to half its size at each Reduce operation. Therefore the size of each fundamental element on the surface is halved between successive scales.
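As a concrete illustration, the tolerance-signum labeling described above can be sketched as follows. This is a minimal sketch, not the authors' implementation; the function name is illustrative, and the sign convention (here H < 0 with K > 0 means "peak") depends on the chosen surface normal orientation.

```python
import numpy as np

def hk_labels(H, K, eps_H=0.03, eps_K=0.005):
    """Classify surface points by thresholded signs of mean (H) and
    Gaussian (K) curvature, following the HK scheme of Besl and Jain.
    Returns 1=peak, 2=pit, 3=saddle ridge, 4=saddle valley, 0=other."""
    # Tolerance signum: values within the zero threshold count as zero.
    sH = np.where(np.abs(H) < eps_H, 0, np.sign(H))
    sK = np.where(np.abs(K) < eps_K, 0, np.sign(K))
    labels = np.zeros(np.shape(H), dtype=np.int8)
    labels[(sH < 0) & (sK > 0)] = 1  # peak: elliptic, convex
    labels[(sH > 0) & (sK > 0)] = 2  # pit: elliptic, concave
    labels[(sH < 0) & (sK < 0)] = 3  # saddle ridge: hyperbolic
    labels[(sH > 0) & (sK < 0)] = 4  # saddle valley: hyperbolic
    return labels
```

Applied to the H and K maps of one pyramid level, this yields the per-pixel label map from which the peak, pit and saddle regions are taken.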
We then calculate the HK maps for each level of the pyramid and obtain a pyramid of HK maps, where a 3x3 window size is used at every scale (Figure 2). As can be seen from Figure 2, at the higher levels of the pyramid the smaller surface elements vanish and the larger elements remain. Finally, we expand the higher levels of the HK pyramid back to the original size using the Expand function. After this expansion, any label on the s-th level widens 2^s times.

Figure 1: Peaks (blue), pits (cyan) and saddles (red) on the surface of a face.

Figure 2: Gaussian pyramid of the HK maps.

2.3. UVS Space

Stacking the levels of the expanded HK pyramid on top of each other, we obtain a 3D volume where u and v are the surface dimensions and s is the scale dimension. To extract the fundamental surface elements we follow several steps. First, we check each voxel of the 3D volume for label consistency with its 10 neighbors (8 at the same level, plus one up and one down through scale). If all neighbor labels are the same, the center voxel keeps its label; otherwise it becomes a blank voxel. After voxel labeling, we apply erosion and dilation operations, respectively, to extract connected labels in the 3D volume. Finally, each connected component inside the u-v-s volume represents a fundamental element on the surface and becomes a node of our topology graph. Figure 3 illustrates the u-v-s space for the face surface given in Figure 2. It is easily seen that the connected component of a fundamental element, for example the nose, has members at a number of successive layers. Recall that the points in the components are found by applying thresholds to the H and K values at each scale level. In Figure 4 the points are shown with their actual values above the threshold: the bigger the value, the lighter the color.
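The 10-neighbor consistency check and the morphological clean-up described above can be sketched as follows. This is a minimal sketch using SciPy; the function name, the default structuring elements, and the clamping of the scale neighborhood at the top and bottom levels are our assumptions, not details given in the paper.

```python
import numpy as np
from scipy import ndimage

def uvs_components(label_stack):
    """Given a (s, u, v) stack of expanded HK label maps, keep only voxels
    whose neighbours (8 in-plane, plus one up and one down in scale) all
    share the voxel's label, then apply erosion and dilation and extract
    connected components in the u-v-s volume."""
    S, U, V = label_stack.shape
    keep = np.zeros_like(label_stack)
    for s in range(S):
        # Scale neighbours are clamped at the first and last levels.
        s0, s1 = max(s - 1, 0), min(s + 1, S - 1)
        for u in range(1, U - 1):
            for v in range(1, V - 1):
                lab = label_stack[s, u, v]
                if lab == 0:
                    continue  # blank voxel
                nbhd = label_stack[s0:s1 + 1, u - 1:u + 2, v - 1:v + 2]
                if np.all(nbhd == lab):
                    keep[s, u, v] = lab
    mask = keep > 0
    mask = ndimage.binary_erosion(mask)
    mask = ndimage.binary_dilation(mask)
    comps, n = ndimage.label(mask)  # connected components in u-v-s
    return comps, n
```

Each resulting component corresponds to one fundamental element and would become a node of the topology graph.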
For example, the change in the curvature value for the eye pits can be observed from the color, which changes from light cyan at the smaller scales to dark cyan at the larger scales. Examining this color change, one can decide that the scale of an eye pit lies between levels 2 and 3 of the pyramid. Localization of scale information in scale-space is explained in the next section.

Figure 3: UVS volume after morphological operations, printed above the surface.

Figure 4: A closer view of the UVS space in Figure 3. The colors indicate the surface labels (cyan for pits, red for peaks and blue for saddles). The lighter the color, the bigger the absolute difference between the curvature value and the threshold.

3. Scale-Space Localization

Since we use Gaussian smoothing to construct the scale-space volume, the relation between the smoothing kernel size and the scale dimension of the volume is as follows:

σ_B / σ_A = 2^(S_A − S_B)    (1)

This relation is verified on synthetic Gaussian surfaces at different scales. Figure 5 shows the UVS spaces for three synthetically generated unit-volume Gaussian surface models. Each Gaussian surface has half the standard deviation of the one on its right. According to (1), the scale values for these Gaussian surfaces must then satisfy:

σ_B / σ_A = 2^(S_A − S_B) = 2,  so  S_A − S_B = 1    (2)

The location and scale of a fundamental element in the UVS volume are estimated by computing the weighted average over the voxels covered by the connected component defining that fundamental element (a peak, in the case of the synthetic Gaussians). A weight value is assigned to each voxel inside the connected component using the 2nd norm of the absolute differences of the curvature values from the thresholds:

w_{i,j} = ( (H_{i,j} − Є_H)^2 + (K_{i,j} − Є_K)^2 )^(1/2)    (3)

After computing the weighted averages, we observe that the estimated locations are the centers of the synthetic Gaussians and the scales satisfy the relation given in (2), where S_A = 3.86, S_B = 2.80 and S_C =. Each connected component, with its surface label, location and scale information, may be represented as a node in a topology graph, which is out of the scope of this paper.
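Under this reading of Eq. (3), with the differences taken from the zero thresholds Є_H and Є_K, the weighted localization can be sketched as follows. The function name and the exact weight definition are assumptions; the paper's equation is reconstructed from a garbled source.

```python
import numpy as np

def localize(voxels, H_vals, K_vals, eps_H=0.03, eps_K=0.005):
    """Estimate the (u, v, s) location and scale of a fundamental element
    as the weighted mean of its connected component's voxels, with each
    voxel weighted by the 2-norm of its curvature differences (Eq. 3)."""
    voxels = np.asarray(voxels, dtype=float)      # rows of (u, v, s)
    w = np.hypot(H_vals - eps_H, K_vals - eps_K)  # Eq. (3) weights
    return (w[:, None] * voxels).sum(axis=0) / w.sum()
```

Voxels with curvature values far above the threshold thus pull the estimated center and scale toward themselves, which matches the light-to-dark color interpretation of Figure 4.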
We suggest, however, that if the links between the nodes carry relative information (for example, the ratio of the larger node scale to the smaller node scale contributing to the link), the graph becomes directed, which may help graph matching, a problem that is usually hard for fully connected graphs.

Figure 5: Three unit-volume Gaussian surfaces at different scales (σ_A = σ_K, σ_B = 2σ_K, σ_C = 4σ_K) and their respective UVS volumes.

4. Results

Initially we tested the algorithm on objects belonging to the same class and having nearly the same size and orientation. We used range maps of three different facial surfaces, whose UVS volumes are shown above the facial surfaces in Figure 6. The nodes (fundamental elements) extracted from each of these surfaces are given in Table 1. Since they are similar surface structures, the extracted components have similar

coordinates and scales. For the sake of clarity, selected facial components such as the nose peak and the eye pits are placed in the top rows of Table 1, where the u, v and scale values of each component are given in the corresponding columns. As can be seen from the table, these components have very similar scales and coordinates across the surfaces. The scale space of each face also contains some fundamental structures that differ from the others; for example, the face in the middle of Figure 6 has an emphasized nose saddle (3 layers of red between the eye pits). Next, we tested our algorithm against transformations and scaling. Figure 7 shows a facial surface scanned at different resolutions and poses. The frontal view of the original face is shown on the left. In the middle, the face is given at a different resolution and rotated with respect to the scanner's viewing direction.

Figure 6: UVS volumes and surfaces for three different facial models.

Table 1: Extracted nodes from the surfaces in Figure 6. Each row lists type, U, V, S for faces a), b), c).

Left Eye Pit:   a) Pit 18.81 38.82 2.76   b) Pit 23.55 45.35 2.58   c) Pit 22.44 40.13 2.69
Right Eye Pit:  a) Pit 19.96 85.53 2.86   b) Pit 22.43 82.73 2.50   c) Pit 23.45 82.85 2.63
Nose Peak:      a) Peak 55.19 62.00 2.67  b) Peak 57.43 62.70 2.72  c) Peak 53.52 64.42 2.71
Nose Saddle:    a) Peak 101.15 72.17 1.59 b) S. Rd. 19.59 64.30 2.83 c) S. Rd. 21.31 63.59 2.54
                a) Peak 28.30 102.36 1.53 b) Peak 104.70 72.04 1.90 c) Peak 98.38 71.52 1.66
                a) Pit 20.15 3.97 2.00    b) Pit 30.53 4.54 2.29    c) Pit 24.02 3.89 2.00
                a) Peak 28.93 22.47 2.00  b) Peak 105.58 54.38 2.36 c) Peak 98.65 54.00 2.14
                a) Pit 97.76 32.53 2.53   b) Peak 88.87 63.00 2.00  c) Peak 85.74 55.69 2.25
                a) S. Vl. 34.98 102.03 2.00 b) Peak 85.97 67.98 2.00 c) Peak 85.70 71.46 2.30
                a) Pit 9.38 93.56 3.00    b) S. Vl. 116.52 77.21 2.00 c) Peak 29.93 101.48 2.00
                a) Peak 100.76 53.62 1.60 b) S. Vl. 97.01 57.00 3.00 c) Pit 98.88 93.89 2.44
                a) S. Vl. 116.96 63.99 3.00 b) Pit 19.19 51.00 1.00 c) S. Vl. 97.01 66.93 3.00
Finally, on the right, the face scanned from a different viewing angle is given. Since this face has an out-of-plane rotation, some facial regions are occluded. These surfaces were generated artificially from the original face scan. Table 2 lists the nodes extracted from these surfaces; again the important facial components are given in the top rows. The u-v coordinates and scales of these surfaces follow the transformations and scaling applied to the original face, which shows that the algorithm is invariant to scale and rotation.

Figure 7: UVS volumes and surfaces for scaled and rotated versions of a face: a) original face, b) smaller and in-plane rotated face, c) out-of-plane rotated face.

Table 2: Extracted nodes from the surfaces in Figure 7. Each row lists type, U, V, S for faces a), b), c).

Left Eye:  a) Pit 79.20 35.17 2.64   b) Pit 45.16 68.88 2.45   c) Pit 64.39 40.68 3.00
Right Eye: a) Pit 50.99 34.96 2.69   b) Pit 27.97 74.67 2.43   c) Pit 41.55 40.83 3.00
Nose:      a) Peak 66.25 64.56 2.90  b) Peak 41.79 86.39 3.04  c) Peak 60.95 62.84 2.93
Chin:      a) Peak 78.13 120.89 2.65 b) Peak 54.46 115.52 2.56 c) Peak 53.71 107.23 2.68
           a) Peak 56.42 121.63 2.46 b) Peak 44.85 57.48 2.62  c) Peak 35.34 25.76 2.42
           a) Peak 35.51 17.26 2.36  b) Peak 18.91 66.34 2.73  c) Peak 21.98 26.02 2.00
           a) Peak 98.81 20.12 2.70  b) Peak 59.28 82.70 2.00  c) Peak 29.66 38.00 2.00
           a) Peak 35.05 32.15 2.00  b) Peak 41.10 102.08 2.00 c) Pit 81.20 20.49 3.00
           a) S. Vl. 12.49 1.00 3.00 b) Pit 10.71 34.14 3.00   c) Peak 70.74 27.38 3.00
           a) Pit 16.99 106.00 3.00  b) Pit 56.88 37.26 3.00   c) Pit 81.10 45.27 3.00
           a) Pit 105.56 116.11 3.00 b) Pit 65.08 48.87 3.00   c) Peak 54.00 83.54 3.00
           a) Pit 23.89 116.12 3.00  b) Pit 69.02 56.99 3.00   c) Pit 76.70 101.08 3.00
           a) Pit 8.99 95.53 3.00    b) Pit 7.71 106.61 3.00   c) Pit 12.97 103.10 3.00
           a) S. Vl. 57.00 127.01 3.00 b) Pit 20.80 116.85 3.00 c) S. Vl. 80.52 1.00 4.00

When examined in detail, one can see that the nose of the face in Figure 7b has a bigger scale than the noses in Figures 7a and 7c. Although this is not expected, it has a reasonable explanation. In Figure 7b, as the surface is processed at higher scales, the face itself becomes a peak lying on the plane to which the face belongs. This bump appears at higher scales and has coordinates very close to those of the nose, since the nose is in the middle of the face. If the two are taken as a single connected component, the scale of the peak at the coordinates of the nose therefore becomes larger. This phenomenon is expected for data with lower resolution.

5. Conclusion

In this paper we have presented a method to extract transform- and scale-invariant elements from 3D surfaces using a multi-scale analysis technique. We estimate mean and Gaussian curvature values for a given surface at various scales. We then construct a Gaussian pyramid of the surface in order to estimate the scales and coordinates of the fundamental surface elements. We build a UVS volume in which fundamental elements can be extracted by morphological operations and connected component search. Pyramid techniques are fast, as the scale increases exponentially; however, inspecting the UVS space with linearly increasing scale might give better localization in scale space. With such transform- and scale-invariant fundamental surface elements, registration of scanner outputs could be achieved easily. Also, by constructing topological graphs based on the fundamental surface elements and their relations, well-known exact or inexact graph matching techniques could be applied for object recognition. The features to be used for the nodes and links of the graph structure, and the graph matching methods, form our future study topics.

Furthermore, we believe that applying a probabilistic graph model would also be suitable when our algorithm is used for object recognition purposes.

References

[1] P. J. Besl and R. C. Jain, "Segmentation Through Variable-Order Surface Fitting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, 1988.
[2] D. Bespalov, A. Shokoufandeh, W. C. Regli and W. Sun, "Scale-Space Representation and Classification of 3D Models," Journal of Computing and Information Science in Engineering, vol. 3.
[3] P. Burt and E. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, vol. COM-31, no. 4.
[4] H. Cantzler and R. B. Fisher, "Comparison of HK and SC Curvature Description Methods," in Proc. Third International Conference on 3D Digital Imaging and Modeling.
[5] S. J. Dickinson, A. P. Pentland and A. Rosenfeld, "3D Shape Recovery Using Distributed Aspect Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, 1992.
[6] S. J. Dickinson, D. Metaxas and A. Pentland, "The Role of Model-Based Segmentation in the Recovery of Volumetric Parts from Range Data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 3.
[7] T. J. Fan, G. Medioni and R. Nevatia, "Recognizing 3D Objects Using Surface Descriptions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 11.
[8] F. P. Ferrie, J. Lagarde and P. Whaite, "Darboux Frames, Snakes, and Super-Quadrics: Geometry from the Bottom Up," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 8, 1993.
[9] P. J. Flynn and A. K. Jain, "On Reliable Curvature Estimation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, June.
[10] S. Gold and A. Rangarajan, "A Graduated Assignment Algorithm for Graph Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 4.
[11] G. Guy and G. Medioni, "Inference of Surfaces, 3D Curves and Junctions from Sparse, Noisy, 3D Data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 11.
[12] A. Hilton, J. Illingworth and T. Windeatt, "Statistics of Surface Curvature Estimates," Pattern Recognition, vol. 28, no. 8.
[13] A. Hoover, D. Goldgof and K. W. Bowyer, "Dynamic-Scale Model Construction from Range Imagery," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12.
[14] B. K. P. Horn, "Extended Gaussian Images," Proceedings of the IEEE, vol. 72, Dec.
[15] A. Johnson and M. Hebert, "Efficient Multiple Model Recognition in Cluttered 3D Scenes," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, June.
[16] W. Kim and A. C. Kak, "3D Object Recognition Using Bipartite Matching Embedded in Discrete Relaxation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13.
[17] J. J. Koenderink and A. J. van Doorn, "Surface Shape and Curvature Scales," Image and Vision Computing, vol. 10, no. 8, 1992.
[18] Q. Li, M. Zhou and J. Liu, "Multi-resolution Mesh Based 3D Object Recognition," in Proc. IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications, in conjunction with CVPR 2000, 2000.
[19] C. W. Liao and G. Medioni, "Surface Approximation of a Cloud of 3D Points," in Proc. Second CAD-Based Vision Workshop, Feb.
[20] T. Lindeberg, Scale-Space Theory in Computer Vision. The Netherlands: Kluwer Academic Publishers.
[21] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, 2004.
[22] B. Luo and E. R. Hancock, "Structural Matching Using the EM Algorithm and Singular Value Decomposition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23.
[23] K. Mikolajczyk and C. Schmid, "Scale and Affine Invariant Interest Point Detectors," International Journal of Computer Vision, vol. 60, no. 1.
[24] M. Pelillo, K. Siddiqi and S. Zucker, "Matching Hierarchical Structures Using Association Graphs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 11.
[25] L. G. Shapiro and R. M. Haralick, "Structural Descriptions and Inexact Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 3.
[26] A. Shokoufandeh, D. Macrini, S. Dickinson, K. Siddiqi and S. W. Zucker, "Indexing Hierarchical Structures Using Graph Spectra," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 7.
[27] A. Shokoufandeh, S. Dickinson, C. Johnsson, L. Bretzner and T. Lindeberg, "On the Representation and Matching of Qualitative Shape at Multiple Scales," in Proc. 7th European Conference on Computer Vision, vol. 3.
[28] K. Siddiqi, A. Shokoufandeh, S. Dickinson and S. Zucker, "Shock Graphs and Shape Matching," International Journal of Computer Vision, vol. 30, pp. 1-24.
[29] S. S. Sinha and P. J. Besl, "Principal Patches: A Viewpoint-Invariant Surface Description," in Proc. IEEE International Conference on Robotics and Automation, pp. 7-11, May.
[30] C. K. Tang and G. Medioni, "Curvature-Augmented Tensor Voting for Shape Inference from Noisy 3D Data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 6.
[31] W. S. Tong, C. K. Tang, P. Mordohai and G. Medioni, "First Order Augmentation to Tensor Voting for Boundary Inference and Multiscale Analysis in 3D," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, 2004.
[32] R. C. Veltkamp, "Shape Matching: Similarity Measures and Algorithms," in Proc. International Conference on Shape Modeling and Applications.
[33] G. Xu and X. Wan, "Description of 3D Object in Range Image," in Proc. 9th International Conference on Pattern Recognition, vol. 1, Nov.
[34] P. Yuen, F. Mokhtarian, N. Khalili and J. Illingworth, "Curvature and Torsion Feature Extraction from Free-form 3D Meshes at Multiple Scales," IEE Proceedings - Vision, Image and Signal Processing, vol. 147, no. 5.
[35] H. Zha, S. Tahira and T. Hasegawa, "Multi-Resolution Surface Description of 3D Objects by Shape-Adaptive Triangular Meshes," in Proc. 14th International Conference on Pattern Recognition, 1998.


More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE) Features Points Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Finding Corners Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

Overview. Related Work Tensor Voting in 2-D Tensor Voting in 3-D Tensor Voting in N-D Application to Vision Problems Stereo Visual Motion

Overview. Related Work Tensor Voting in 2-D Tensor Voting in 3-D Tensor Voting in N-D Application to Vision Problems Stereo Visual Motion Overview Related Work Tensor Voting in 2-D Tensor Voting in 3-D Tensor Voting in N-D Application to Vision Problems Stereo Visual Motion Binary-Space-Partitioned Images 3-D Surface Extraction from Medical

More information

Feature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1

Feature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1 Feature Detection Raul Queiroz Feitosa 3/30/2017 Feature Detection 1 Objetive This chapter discusses the correspondence problem and presents approaches to solve it. 3/30/2017 Feature Detection 2 Outline

More information

Key properties of local features

Key properties of local features Key properties of local features Locality, robust against occlusions Must be highly distinctive, a good feature should allow for correct object identification with low probability of mismatch Easy to etract

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Scale-invariant Region-based Hierarchical Image Matching

Scale-invariant Region-based Hierarchical Image Matching Scale-invariant Region-based Hierarchical Image Matching Sinisa Todorovic and Narendra Ahuja Beckman Institute, University of Illinois at Urbana-Champaign {n-ahuja, sintod}@uiuc.edu Abstract This paper

More information

Object Recognition with Invariant Features

Object Recognition with Invariant Features Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user

More information

Fitting: The Hough transform

Fitting: The Hough transform Fitting: The Hough transform Voting schemes Let each feature vote for all the models that are compatible with it Hopefully the noise features will not vote consistently for any single model Missing data

More information

The most cited papers in Computer Vision

The most cited papers in Computer Vision COMPUTER VISION, PUBLICATION The most cited papers in Computer Vision In Computer Vision, Paper Talk on February 10, 2012 at 11:10 pm by gooly (Li Yang Ku) Although it s not always the case that a paper

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

CS664 Lecture #21: SIFT, object recognition, dynamic programming

CS664 Lecture #21: SIFT, object recognition, dynamic programming CS664 Lecture #21: SIFT, object recognition, dynamic programming Some material taken from: Sebastian Thrun, Stanford http://cs223b.stanford.edu/ Yuri Boykov, Western Ontario David Lowe, UBC http://www.cs.ubc.ca/~lowe/keypoints/

More information

CS 4495 Computer Vision A. Bobick. CS 4495 Computer Vision. Features 2 SIFT descriptor. Aaron Bobick School of Interactive Computing

CS 4495 Computer Vision A. Bobick. CS 4495 Computer Vision. Features 2 SIFT descriptor. Aaron Bobick School of Interactive Computing CS 4495 Computer Vision Features 2 SIFT descriptor Aaron Bobick School of Interactive Computing Administrivia PS 3: Out due Oct 6 th. Features recap: Goal is to find corresponding locations in two images.

More information

Segmentation of Range Data for the Automatic Construction of Models of Articulated Objects

Segmentation of Range Data for the Automatic Construction of Models of Articulated Objects Segmentation of Range Data for the Automatic Construction of Models of Articulated Objects A. P. Ashbrook Department of Artificial Intelligence The University of Edinburgh Edinburgh, Scotland anthonya@dai.ed.ac.uk

More information

Towards the completion of assignment 1

Towards the completion of assignment 1 Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?

More information

Implementing the Scale Invariant Feature Transform(SIFT) Method

Implementing the Scale Invariant Feature Transform(SIFT) Method Implementing the Scale Invariant Feature Transform(SIFT) Method YU MENG and Dr. Bernard Tiddeman(supervisor) Department of Computer Science University of St. Andrews yumeng@dcs.st-and.ac.uk Abstract The

More information

SCALE INVARIANT FEATURE TRANSFORM (SIFT)

SCALE INVARIANT FEATURE TRANSFORM (SIFT) 1 SCALE INVARIANT FEATURE TRANSFORM (SIFT) OUTLINE SIFT Background SIFT Extraction Application in Content Based Image Search Conclusion 2 SIFT BACKGROUND Scale-invariant feature transform SIFT: to detect

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

View-Based 3-D Object Recognition using Shock Graphs Diego Macrini Department of Computer Science University of Toronto Sven Dickinson

View-Based 3-D Object Recognition using Shock Graphs Diego Macrini Department of Computer Science University of Toronto Sven Dickinson View-Based 3-D Object Recognition using Shock Graphs Diego Macrini Department of Computer Science University of Toronto Sven Dickinson Department of Computer Science University of Toronto Ali Shokoufandeh

More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

BMVC 2000 doi: /c.14.45

BMVC 2000 doi: /c.14.45 Free-Form -D Object Recognition at Multiple Scales Farzin Mokhtarian, Nasser Khalili and Peter Yuen Centre for Vision, Speech, and Signal Processing School of Electronic Engineering, IT and Mathematics

More information

Local qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet:

Local qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet: Local qualitative shape from stereo without detailed correspondence Extended Abstract Shimon Edelman Center for Biological Information Processing MIT E25-201, Cambridge MA 02139 Internet: edelman@ai.mit.edu

More information

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011 Previously Part-based and local feature models for generic object recognition Wed, April 20 UT-Austin Discriminative classifiers Boosting Nearest neighbors Support vector machines Useful for object recognition

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

A Curvature-based Approach for Multi-scale. Feature Extraction from 3D Meshes and Unstructured Point Clouds

A Curvature-based Approach for Multi-scale. Feature Extraction from 3D Meshes and Unstructured Point Clouds A Curvature-based Approach for Multi-scale 1 Feature Extraction from 3D Meshes and Unstructured Point Clouds Huy Tho Ho and Danny Gibbins Sensor Signal Processing Group School of Electrical and Electronic

More information

Coarse-to-Fine Search Technique to Detect Circles in Images

Coarse-to-Fine Search Technique to Detect Circles in Images Int J Adv Manuf Technol (1999) 15:96 102 1999 Springer-Verlag London Limited Coarse-to-Fine Search Technique to Detect Circles in Images M. Atiquzzaman Department of Electrical and Computer Engineering,

More information

Structured light 3D reconstruction

Structured light 3D reconstruction Structured light 3D reconstruction Reconstruction pipeline and industrial applications rodola@dsi.unive.it 11/05/2010 3D Reconstruction 3D reconstruction is the process of capturing the shape and appearance

More information

Patch-based Object Recognition. Basic Idea

Patch-based Object Recognition. Basic Idea Patch-based Object Recognition 1! Basic Idea Determine interest points in image Determine local image properties around interest points Use local image properties for object classification Example: Interest

More information

Correcting User Guided Image Segmentation

Correcting User Guided Image Segmentation Correcting User Guided Image Segmentation Garrett Bernstein (gsb29) Karen Ho (ksh33) Advanced Machine Learning: CS 6780 Abstract We tackle the problem of segmenting an image into planes given user input.

More information

3D Models and Matching

3D Models and Matching 3D Models and Matching representations for 3D object models particular matching techniques alignment-based systems appearance-based systems GC model of a screwdriver 1 3D Models Many different representations

More information

Evaluation and comparison of interest points/regions

Evaluation and comparison of interest points/regions Introduction Evaluation and comparison of interest points/regions Quantitative evaluation of interest point/region detectors points / regions at the same relative location and area Repeatability rate :

More information

Obtaining Feature Correspondences

Obtaining Feature Correspondences Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant

More information

Local Features Tutorial: Nov. 8, 04

Local Features Tutorial: Nov. 8, 04 Local Features Tutorial: Nov. 8, 04 Local Features Tutorial References: Matlab SIFT tutorial (from course webpage) Lowe, David G. Distinctive Image Features from Scale Invariant Features, International

More information

Matching and Recognition in 3D. Based on slides by Tom Funkhouser and Misha Kazhdan

Matching and Recognition in 3D. Based on slides by Tom Funkhouser and Misha Kazhdan Matching and Recognition in 3D Based on slides by Tom Funkhouser and Misha Kazhdan From 2D to 3D: Some Things Easier No occlusion (but sometimes missing data instead) Segmenting objects often simpler From

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

Corner Detection. GV12/3072 Image Processing.

Corner Detection. GV12/3072 Image Processing. Corner Detection 1 Last Week 2 Outline Corners and point features Moravec operator Image structure tensor Harris corner detector Sub-pixel accuracy SUSAN FAST Example descriptor: SIFT 3 Point Features

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

Advanced Video Content Analysis and Video Compression (5LSH0), Module 4

Advanced Video Content Analysis and Video Compression (5LSH0), Module 4 Advanced Video Content Analysis and Video Compression (5LSH0), Module 4 Visual feature extraction Part I: Color and texture analysis Sveta Zinger Video Coding and Architectures Research group, TU/e ( s.zinger@tue.nl

More information

INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM

INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM ABSTRACT Mahesh 1 and Dr.M.V.Subramanyam 2 1 Research scholar, Department of ECE, MITS, Madanapalle, AP, India vka4mahesh@gmail.com

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Multiscale 3D Feature Extraction and Matching

Multiscale 3D Feature Extraction and Matching Multiscale 3D Feature Extraction and Matching Hadi Fadaifard and George Wolberg Graduate Center, City University of New York May 18, 2011 Hadi Fadaifard and George Wolberg Multiscale 3D Feature Extraction

More information

Part-based and local feature models for generic object recognition

Part-based and local feature models for generic object recognition Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza

More information

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai Traffic Sign Detection Via Graph-Based Ranking and Segmentation Algorithm C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT

More information

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS Shubham Saini 1, Bhavesh Kasliwal 2, Shraey Bhatia 3 1 Student, School of Computing Science and Engineering, Vellore Institute of Technology, India,

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

FOOTPRINTS EXTRACTION

FOOTPRINTS EXTRACTION Building Footprints Extraction of Dense Residential Areas from LiDAR data KyoHyouk Kim and Jie Shan Purdue University School of Civil Engineering 550 Stadium Mall Drive West Lafayette, IN 47907, USA {kim458,

More information

What is Computer Vision?

What is Computer Vision? Perceptual Grouping in Computer Vision Gérard Medioni University of Southern California What is Computer Vision? Computer Vision Attempt to emulate Human Visual System Perceive visual stimuli with cameras

More information

Computer Vision. Recap: Smoothing with a Gaussian. Recap: Effect of σ on derivatives. Computer Science Tripos Part II. Dr Christopher Town

Computer Vision. Recap: Smoothing with a Gaussian. Recap: Effect of σ on derivatives. Computer Science Tripos Part II. Dr Christopher Town Recap: Smoothing with a Gaussian Computer Vision Computer Science Tripos Part II Dr Christopher Town Recall: parameter σ is the scale / width / spread of the Gaussian kernel, and controls the amount of

More information

Prof. Feng Liu. Spring /26/2017

Prof. Feng Liu. Spring /26/2017 Prof. Feng Liu Spring 2017 http://www.cs.pdx.edu/~fliu/courses/cs510/ 04/26/2017 Last Time Re-lighting HDR 2 Today Panorama Overview Feature detection Mid-term project presentation Not real mid-term 6

More information

Face Recognition Based On Granular Computing Approach and Hybrid Spatial Features

Face Recognition Based On Granular Computing Approach and Hybrid Spatial Features Face Recognition Based On Granular Computing Approach and Hybrid Spatial Features S.Sankara vadivu 1, K. Aravind Kumar 2 Final Year Student of M.E, Department of Computer Science and Engineering, Manonmaniam

More information

Scale Invariant Feature Transform

Scale Invariant Feature Transform Why do we care about matching features? Scale Invariant Feature Transform Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Automatic

More information

A Novel Extreme Point Selection Algorithm in SIFT

A Novel Extreme Point Selection Algorithm in SIFT A Novel Extreme Point Selection Algorithm in SIFT Ding Zuchun School of Electronic and Communication, South China University of Technolog Guangzhou, China zucding@gmail.com Abstract. This paper proposes

More information

Visual Learning and Recognition of 3D Objects from Appearance

Visual Learning and Recognition of 3D Objects from Appearance Visual Learning and Recognition of 3D Objects from Appearance (H. Murase and S. Nayar, "Visual Learning and Recognition of 3D Objects from Appearance", International Journal of Computer Vision, vol. 14,

More information

Lecture 10 Detectors and descriptors

Lecture 10 Detectors and descriptors Lecture 10 Detectors and descriptors Properties of detectors Edge detectors Harris DoG Properties of detectors SIFT Shape context Silvio Savarese Lecture 10-26-Feb-14 From the 3D to 2D & vice versa P =

More information

3D Models and Matching

3D Models and Matching 3D Models and Matching representations for 3D object models particular matching techniques alignment-based systems appearance-based systems GC model of a screwdriver 1 3D Models Many different representations

More information

Shape Modeling and Geometry Processing

Shape Modeling and Geometry Processing 252-0538-00L, Spring 2018 Shape Modeling and Geometry Processing Discrete Differential Geometry Differential Geometry Motivation Formalize geometric properties of shapes Roi Poranne # 2 Differential Geometry

More information

Multi-view stereo. Many slides adapted from S. Seitz

Multi-view stereo. Many slides adapted from S. Seitz Multi-view stereo Many slides adapted from S. Seitz Beyond two-view stereo The third eye can be used for verification Multiple-baseline stereo Pick a reference image, and slide the corresponding window

More information

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1 Last update: May 4, 200 Vision CMSC 42: Chapter 24 CMSC 42: Chapter 24 Outline Perception generally Image formation Early vision 2D D Object recognition CMSC 42: Chapter 24 2 Perception generally Stimulus

More information

CRF Based Point Cloud Segmentation Jonathan Nation

CRF Based Point Cloud Segmentation Jonathan Nation CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to

More information

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882 Matching features Building a Panorama Computational Photography, 6.88 Prof. Bill Freeman April 11, 006 Image and shape descriptors: Harris corner detectors and SIFT features. Suggested readings: Mikolajczyk

More information

Curvature Computation on Free-Form 3-D Meshes at Multiple Scales

Curvature Computation on Free-Form 3-D Meshes at Multiple Scales Computer Vision and Image Understanding 83, 118 139 (2001) doi:10.1006/cviu.2001.0919, available online at http://www.idealibrary.com on Curvature Computation on Free-Form 3-D Meshes at Multiple Scales

More information

CEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt.

CEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt. CEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt. Section 10 - Detectors part II Descriptors Mani Golparvar-Fard Department of Civil and Environmental Engineering 3129D, Newmark Civil Engineering

More information

Using Geometric Blur for Point Correspondence

Using Geometric Blur for Point Correspondence 1 Using Geometric Blur for Point Correspondence Nisarg Vyas Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA Abstract In computer vision applications, point correspondence

More information

Scale Invariant Feature Transform

Scale Invariant Feature Transform Scale Invariant Feature Transform Why do we care about matching features? Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Image

More information

Computer Vision I - Filtering and Feature detection

Computer Vision I - Filtering and Feature detection Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image

More information

EDGE BASED REGION GROWING

EDGE BASED REGION GROWING EDGE BASED REGION GROWING Rupinder Singh, Jarnail Singh Preetkamal Sharma, Sudhir Sharma Abstract Image segmentation is a decomposition of scene into its components. It is a key step in image analysis.

More information

A Novel Algorithm for Color Image matching using Wavelet-SIFT

A Novel Algorithm for Color Image matching using Wavelet-SIFT International Journal of Scientific and Research Publications, Volume 5, Issue 1, January 2015 1 A Novel Algorithm for Color Image matching using Wavelet-SIFT Mupuri Prasanth Babu *, P. Ravi Shankar **

More information

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu

More information

Local Features: Detection, Description & Matching

Local Features: Detection, Description & Matching Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING DS7201 ADVANCED DIGITAL IMAGE PROCESSING II M.E (C.S) QUESTION BANK UNIT I 1. Write the differences between photopic and scotopic vision? 2. What

More information

Pictures at an Exhibition

Pictures at an Exhibition Pictures at an Exhibition Han-I Su Department of Electrical Engineering Stanford University, CA, 94305 Abstract We employ an image identification algorithm for interactive museum guide with pictures taken

More information

An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners

An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners Mohammad Asiful Hossain, Abdul Kawsar Tushar, and Shofiullah Babor Computer Science and Engineering Department,

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Image Analysis Lecture Segmentation. Idar Dyrdal

Image Analysis Lecture Segmentation. Idar Dyrdal Image Analysis Lecture 9.1 - Segmentation Idar Dyrdal Segmentation Image segmentation is the process of partitioning a digital image into multiple parts The goal is to divide the image into meaningful

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information