Growing Depth Image Superpixels for Foliage Modeling


Daniel Morris, Saif Imran
Dept. of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA

Jin Chen, David M. Kramer
MSU-DOE Plant Research Lab, Michigan State University, East Lansing, MI 48824, USA

Abstract: This paper presents a method for segmenting depth images into superpixels without requiring color images. Typically, superpixel methods cluster pixels based on proximity in a multidimensional color space. However, building superpixels from time-of-flight depth images poses a number of new challenges: pixels have no color channels for similarity comparisons, the resolution of depth cameras is low compared to color cameras, and there is significant depth noise. To address these challenges we propose a superpixel method that approximates a depth image with a set of planar facets. Facets are grown from seed points to cover the scene. Facet boundaries tend to coincide with high-curvature regions and depth discontinuities, typically giving an over-segmentation of the scene. This work is motivated by automated foliage modeling, and the data we consider are of dense 3D foliage. Superpixel results are shown on foliage and are quantified using labeled data.

Keywords: superpixels; depth image; segmentation; energy minimization; RGB-D; time-of-flight camera

I. INTRODUCTION

Superpixels have significant utility in image analysis. With greater information content than individual pixels, they allow for richer low-level descriptors, and at the same time there are far fewer of them than raw pixels, which reduces the dimensionality of graphs and optimization problems built on them. As a result they have been used for numerous computer vision tasks including segmentation [1], [2], [3], object detection [4], object tracking [5], and stereo depth estimation [6].
The key property that makes them useful for these tasks is that they over-segment the image: the boundaries of objects are included in the boundaries between superpixels. This property means superpixels do not need to be split and can be treated as low-level building blocks for scene and object analysis.

The recent availability of consumer depth cameras, such as time-of-flight sensors [7], has provided an important new modality for sensing the world. These depth images can be used for many of the same computer vision tasks as color imagery, including segmentation, object detection, object tracking, etc., and similar problems are faced in achieving these tasks. In the application that motivates this paper, a time-of-flight sensor is placed over plants in a growth chamber and leaf growth over a number of days is recorded. A key need is to automatically segment the leaves and leaf fragments within dense foliage.

Figure 1. (a) Depth image of a plant generated by a time-of-flight camera. (b) Near-IR reflectivity measured for each depth pixel. (c) Color image of the plant from an adjacent camera. (d) Depth image pixels projected into 3D, illustrating significant noise and edge artifacts.

While there are methods for 3D modeling of leaves, including [8], [9], [10], [11], automatic segmentation of them from depth imagery has not been solved, and these methods rely on color or intensity images for segmentation, in some cases leveraging superpixels of these [10]. Single-image color segmentation of leaves works well [12] with plain backgrounds, but can fail to segment uniformly-colored, overlapping leaves. While ideally an RGB-D superpixel method such as [13] could use both color and depth, in practice the parallax for close-in plants can be a large fraction of leaf features, making this impractical. The alternative is to compute superpixels in color and depth separately and combine the results afterwards.
The missing component is a depth-based superpixel method, and achieving that is the goal of this paper. Future work will address combining superpixels from multiple modalities.

In an ideal superpixel segmentation, the true object boundaries are included in the boundaries of the superpixels. Poor performance occurs when superpixels span the boundaries of objects.

Figure 2. An illustration of the failures in two of the best color-based superpixel methods when used on the overlapping foliage in Fig. 1(c). (a) A close-up of SLIC superpixels [15] with boundaries shown in orange. The manually identified outline of the leaves is shown in magenta. The SLIC superpixel boundaries align well with the boundary between the leaves and the background; however, they do not align well with the boundaries between overlapping leaves. (b) The graph-cut superpixel method of [14] used with a very low threshold. More of the leaf boundaries are obtained, but nevertheless a number of true overlapping leaf boundaries (shown in magenta) are missed and merged within single superpixels.

Proximity in color space, as used by most superpixel methods [14], [15], [16], is a useful cue for segmentation since object boundaries frequently involve sharp changes in color space. Strategies for leveraging color proximity cues can lead to high scores in boundary recall [17]. On the other hand, proximity in depth can provide easy segmentation for well-separated objects, but is problematic when objects are closely spaced or touching. Overlapping leaves are often closely spaced, resulting in small depth-separation cues between leaves. Thus, rather than use proximity in depth, this paper presents a depth image superpixel method based on the idea of superpixels being the 2D projection of a planar-patch approximation of the depth image. A patch model is different from a mesh model [18] in that we do not require that the boundaries of facets connect. In part this is because superpixels have complicated shapes, but more importantly it is because edge discontinuities in a scene are very often not connected spatially, and connecting them with a mesh leads to blurring of boundaries. Sensor depth noise is an important issue, as the depth separation of leaves can be close to the sensor noise level.
Our method seeks a robust solution for approximating noisy depth surface patches while avoiding spanning boundaries between objects.

II. RELATED WORK

Superpixel methods can be broadly divided into graph-based methods [19], [14], [20], [21], [22] and hill-climbing methods [23], [24], [25], [15], [16]. Graph methods optimize an energy function over edge cuts that includes pixel color similarity along with shape and size priors. The more recent hill-climbing methods, such as SLIC [15] and SEEDS [16], start with a regular grid and refine it, leading to fast performance with fairly evenly sized superpixels. Our method follows a similar approach of starting with a regular grid and refining it. However, since depth pixels have no color or texture, our underlying local similarity model is quite different from all the color-based methods.

With the availability of RGB-D datasets with a depth pixel for each color pixel, it is natural to do joint color- and depth-based superpixel segmentation as in [13]. Both color and depth cues can be leveraged, potentially achieving better object segmentation. With current sensors, the baseline separating the depth and color cameras causes significant parallax and occlusion differences between the two cameras, particularly for complicated structures such as foliage. Obtaining a one-to-one mapping of color and depth pixels entails significant feature matching and object modeling, often from multiple viewpoints as in [26]. In our case, constrained to viewing the scene from a single viewpoint, it is preferable to use the color and depth images separately, as in [27] where the color image is used to upsample the depth image with an enhanced bilateral filter. Unfortunately, it is precisely in regions of detailed structure and occlusions, as exhibited by overlapping leaves, that significant depth errors are reported [27].
In our work we focus on using superpixels to resolve depth discontinuities in the depth image alone, and leave it to future work to improve this with an adjacent color image.

Superpixels can be used as a first step in 3D model building. An alternative is to collect data from a moving sensor and align and fuse it, as done with Kinect Fusion using RGB-D images [28], [29], [26]. These methods involve building a voxel model of the scene and extracting an isosurface. However, in our application the camera remains stationary, making these approaches unsuitable, and that motivates this work to focus on the single-viewpoint segmentation problem.

III. SENSOR DATA

Our data was collected using a Creative Senz3D sensor. Like the Kinect 2, this is a time-of-flight sensor [7], but with a lower pixel resolution. Despite the lower resolution, its smaller angular field of view, along with a closer minimum range of roughly 20cm compared to 50cm for the Kinect 2, means that it can collect higher resolution depth images. This is important for modeling plants and foliage with fine structure. We used the raw depth values rather than the default smoothed values, as these contain more details of the target and return values at greater depth. Sample data from this sensor is shown in Fig. 1. The depth camera returns both a dense pixel depth image and a reflected intensity for each pixel, although only the depth image was used here. In addition, there is a color camera offset roughly 2.5cm from the depth camera. This offset creates large parallax for close-in objects and significant occlusion differences.
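For concreteness, the projection of depth pixels into 3D points (as visualized in Fig. 1(d)) can be sketched with a standard pinhole model. This is not from the paper; the intrinsic parameters `fx`, `fy`, `cx`, `cy` below are illustrative placeholders, and real values would come from calibrating the depth camera.

```python
import numpy as np

def backproject_depth(depth_mm, fx, fy, cx, cy):
    """Project each depth pixel (in mm) to a 3D point with a pinhole model.

    Pixels with zero depth (no sensor return) map to NaN.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)
    z[z == 0] = np.nan          # mark invalid returns
    x = (u - cx) * z / fx       # image column -> lateral offset
    y = (v - cy) * z / fy       # image row -> vertical offset
    return np.stack([x, y, z], axis=-1)   # (h, w, 3) point image

# Illustrative intrinsics only; real values come from camera calibration.
points = backproject_depth(np.full((240, 320), 300.0),
                           fx=280.0, fy=280.0, cx=160.0, cy=120.0)
```

For a flat scene at 300mm, the point at the principal pixel lies on the optical axis, and lateral coordinates grow with distance from the image center.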

IV. METHOD

Unlike color images, depth-image pixels have no color or texture characteristic with which to group them. Instead, we propose that depth-image superpixels should model the local geometry as planar facets. Pixels on flat regions should be grouped together into segments. On the other hand, high-curvature edges and depth discontinuities demarcate the boundaries of objects, and so should lie on the boundaries of superpixels. A depth-image superpixel segmentation will thus be a faceted surface model.

In building a superpixel representation we seek the following four characteristics. First, superpixel boundaries should conform to the boundaries of 3D objects in the scene. That is, while the superpixels will subdivide objects, they should not span multiple objects. Second, superpixels should be contiguous in the image plane. Third, as argued in [15], it is desirable that superpixels be roughly uniform in size, as they will then contain similar information content for use in higher-level scene analysis algorithms. This constant size could be in 3D scene space as in [13], although the utility of this depends on the subsequent processing; here we seek a constant image-space dimension for superpixels. Fourth, since depth-image superpixels provide a planar faceted approximation of the scene, the difference between these planar facets and the measured depths should be minimized. This section describes our approach to achieving these goals.

A. Facet Estimation

Our method starts by using a regular grid to divide the depth image into equal-sized square cells. The cell size should be on the order of the smallest features that we wish to approximate, which in our application is the size of the smallest leaves. The significant challenge we face in building our faceted model is how to avoid crossing depth discontinuities when the depth change across them may be on the order of the sensor noise.
Our approach is to let our facet model the largest contiguous planar region in each grid cell. If there are two surfaces spanned by a cell, then the facet should approximate just one. It is important that the facet is contiguous, because simply finding the best planar fit to the points in a cell has a high chance of finding a plane that crosses the discontinuity, as illustrated in Fig. 3. Hence we fit planes to pixel depths in a grid cell using a robust method that grows contiguous regions, explained as follows.

First we choose a set of seed points to start the plane fitting. For each seed point we want a depth and two slopes to define a plane. The slopes will be noisy, hence we sample these seed points from a smoothed depth image. Not all points on the smoothed depth image are likely to lie on a plane. In particular, the slopes at depth discontinuities are unlikely to result in a good planar fit. To avoid these cases we exclude pixels corresponding to peaks in the gradient images, and also pixels where the smoothed depth differs by more than a threshold from the raw depth image.

Figure 3. A slice illustration of the challenge in fitting facets to data. The red data points (one per pixel) are the same in (a) and (b). In (a) a robust line is fit to the data, but the true surface is really a step function, closer to (b). Our method seeks to grow facets that do not span these boundaries.

Next we need to estimate and evaluate the planar region defined by each seed point. There are two main considerations in evaluating this planar region. The first, H(s), for a set of pixels s, is a likelihood function of the plane given the data, which we seek to maximize in estimating the planar parameters. The second consideration is a prior on the shape or distribution of pixels belonging to a planar region, which we refer to as a penalty term S(s). For instance, this can capture a prior that planar regions should be compact.
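The seed-selection step described above can be sketched as follows. This is our illustrative reconstruction, not the authors' code: the Gaussian smoothing, the local-maximum peak detector, and the parameter values (`sigma`, `t_g`, `t_p`) are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def select_seed_candidates(depth, t_g=5.0, t_p=5.0, sigma=1.5):
    """Pick pixels suitable as planar seeds: sampled from a smoothed depth
    image, excluding gradient peaks (likely depth discontinuities) and
    pixels where smoothing has pulled the depth far from the raw value.
    Thresholds are in depth units (here, mm)."""
    smooth = gaussian_filter(depth, sigma)
    gy, gx = np.gradient(smooth)
    grad_mag = np.hypot(gx, gy)
    # local maxima of gradient magnitude above t_g mark discontinuities
    peaks = (grad_mag == maximum_filter(grad_mag, size=3)) & (grad_mag > t_g)
    consistent = np.abs(smooth - depth) < t_p
    return smooth, ~peaks & consistent
```

Seeds are then drawn at random from pixels where the returned mask is true; each seed's candidate plane is given by the smoothed depth and the slopes (gx, gy) at that pixel.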
Finding a planar region involves hill-climbing an energy function made from these two components:

E(s) = H(s) - αS(s).    (1)

In our implementation we chose the following instantiation of this equation. Our likelihood measure for a planar region was simply N(s), the number of contiguous pixels whose depth points fall within a tight threshold around the plane. Our shape-based penalty term, S(s), was set equal to the number of boundary edges of the pixels on the plane. For a given number of pixels, a compact region will have fewer boundary edges than a region with narrow arms, and so will be penalized less. Both measures are easily calculated. The weight α selects how compact the regions will be.

What still needs to be described is how we obtain a contiguous set of pixels belonging to a seed point. The seed specifies the plane (with its depth and slopes), and so all points within a grid cell can be evaluated by their perpendicular distance to this plane. A mask is created for all pixels belonging to this plane. Then, starting at the seed point, a dilation operator is repeatedly used to expand the seed region, and at each step the dilated pixels are constrained to lie on the mask, not to cross any gradient peaks above a threshold t_g, and not to diverge by more than t_p in depth from the depth pixels. After a fixed number of iterations the dilation stops, and the set of pixels reached from within the mask are the points on the plane. The energy of this planar facet is evaluated with Eq. (1). By repeating this procedure for a randomly chosen set of good seed points, the pixels belonging to the highest-scoring plane are chosen for that grid cell, and a plane is fit to them using least squares to represent the superpixel.

B. Facet Expansion

Up to now each grid cell is modeled as a single facet covering a subset of its pixels. For instance, when there is a discontinuity across a grid cell, the facet will ideally cover the region to one side of the discontinuity. We would like a way to explain the remainder of the pixels in each cell. Assuming that the structures in the scene have dimension at least the size of a superpixel, the planar region not covered is likely to extend out over adjacent superpixels. If this plane is modeled in one of these adjacent superpixels, we can simply extend that plane to cover the uncovered pixels of the current superpixel.

The expansion algorithm works as follows. We apply the contiguous planar facet fitting algorithm described in Section IV-A with the following modifications. We do not need to search for a seed, as we start with the fitted plane for that superpixel. We increase the region in which we grow the plane from a single grid cell to a 3x3-cell region centered at that cell. Our mask is set to those pixels in the new cell region lying within an expanded distance threshold of 1.6 t_p from the surface that do not belong to any of the other facets. In this way, unclaimed pixels from adjacent grid cells are added to a superpixel if they fall on the plane and are contiguous. This procedure is applied to all of the superpixels in a random order. The final result is a set of superpixels modeling the depth image as a set of planar facets.

There remain pixels not claimed by any superpixel. These are primarily of two types, neither of which is well approximated by a plane. At depth discontinuities the depth pixels may have mixed values between a surface and its occluding surface, with high variance. Also, pixels at long range or with low intensity have large noise and do not pass the thresholds we use for fitting planes.
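The constrained region growing and energy scoring of Section IV-A can be sketched as below. This is a hypothetical reconstruction rather than the authors' implementation: it assumes a simple image-space plane parameterization z = a·u + b·v + c (instead of perpendicular distance), counts 4-neighbour transitions as boundary edges for S(s), and folds the gradient-peak constraint into an optional mask.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_facet(depth, seed, plane, t_p=5.0, grad_peaks=None,
               n_iter=20, alpha=0.5):
    """Grow a contiguous planar region from a seed and score it with
    E = H - alpha * S, where H counts inlier pixels (the likelihood) and
    S counts boundary edges of the region (the compactness penalty).

    `plane` is (a, b, c): predicted depth = a*u + b*v + c.
    `grad_peaks` is an optional boolean mask of gradient peaks the region
    may not cross. Thresholds are in depth units (mm)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    a, b, c = plane
    residual = np.abs(depth - (a * u + b * v + c))
    mask = residual < t_p                   # pixels near the candidate plane
    if grad_peaks is not None:
        mask &= ~grad_peaks                 # do not cross discontinuities
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    for _ in range(n_iter):                 # constrained dilation from seed
        region = binary_dilation(region) & mask
    H = region.sum()                        # likelihood: inlier count
    # S: 4-neighbour transitions into/out of the region (boundary edges)
    S = ((region[:, 1:] != region[:, :-1]).sum()
         + (region[1:, :] != region[:-1, :]).sum())
    return region, H - alpha * S
```

In the paper's method this evaluation is repeated over several seeds per grid cell; the highest-energy region wins, and a least-squares plane is refit to its pixels. The same routine, started from the fitted plane over a 3x3-cell window with a widened threshold, gives the facet expansion step.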
We could adjust the threshold based on the pixel noise, but for our application these pixels are not useful, and excluding them from the superpixels does not impact the foliage surface estimation.

V. RESULTS

To test our method we labeled individual leaves on depth images collected from six plants, three of which are shown in Fig. 4 with their labeling. The leaves are densely packed, presenting a challenge to segmentation and superpixel algorithms. A goal of superpixel models is that the superpixel boundaries include the leaf boundaries. We evaluate the results using two standard evaluation criteria [16] for superpixel methods: the Corrected Undersegmentation Error (CUE) and the Boundary Recall (BR) score. To calculate CUE we first assign each superpixel to one ground-truth segment or the background, based on its maximum overlap. The error for a superpixel is the number of its pixels falling outside its assigned segment. The CUE is then the sum of the errors over all superpixels divided by the total number of ground-truth pixels. The denominator intentionally excludes the background region, since adding more background should not affect the CUE. Boundary Recall, on the other hand, is the fraction of boundary pixels that are traced by the superpixels. Here it is the fraction of ground-truth boundary pixels that are within a 1.5-pixel distance of a superpixel edge.

We set the following parameters for all runs of our algorithm: α = 0.5 in Eq. (1), gradient peak threshold t_g = 5mm, and plane error threshold t_p = 5mm. Based on the ground truth, the results of our algorithm are compared with two other methods: a baseline method and SLIC [15]. The baseline method splits the image into a regular grid where each grid cell is a superpixel. SLIC is modified by replacing the LAB color-space image with the depth image. Our method outperforms the other methods in terms of both CUE and BR scores in all six canopies tested.
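The two metrics as described above can be sketched as follows. This is a hedged reconstruction with a hypothetical helper, not the authors' evaluation code; the 1.5-pixel tolerance is implemented as a single 3x3 dilation of the superpixel edges, which exactly covers the pixel offsets with Euclidean distance at most 1.5.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def cue_and_br(superpixels, ground_truth):
    """Corrected Undersegmentation Error and Boundary Recall, following the
    description in the text. Inputs are integer label images; ground-truth
    label 0 is background."""
    # CUE: assign each superpixel to its maximum-overlap GT segment (or
    # background), count its pixels falling outside that segment, and
    # normalize by the number of foreground GT pixels.
    error = 0
    for sp in np.unique(superpixels):
        gt_under = ground_truth[superpixels == sp]
        _, counts = np.unique(gt_under, return_counts=True)
        error += gt_under.size - counts.max()  # pixels outside assigned segment
    cue = error / (ground_truth > 0).sum()

    # BR: fraction of GT boundary pixels within 1.5 pixels of a superpixel edge
    def boundaries(lbl):
        b = np.zeros(lbl.shape, dtype=bool)
        b[:, 1:] |= lbl[:, 1:] != lbl[:, :-1]
        b[1:, :] |= lbl[1:, :] != lbl[:-1, :]
        return b
    gt_b = boundaries(ground_truth)
    sp_b = boundaries(superpixels)
    # one 3x3 dilation marks every pixel within Euclidean distance 1.5
    sp_near = binary_dilation(sp_b, structure=np.ones((3, 3), bool))
    br = (gt_b & sp_near).sum() / gt_b.sum()
    return cue, br
```

A perfect segmentation gives CUE = 0 and BR = 1; a single all-covering superpixel gives a large CUE and BR = 0.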
For subjective evaluation, we plot the facet superpixel boundaries on the plant canopies, as shown in Fig. 5. The superpixel boundaries overlap the boundaries in the depth images quite well.

VI. CONCLUSION

We propose a new superpixel approach that operates purely on depth images. It approximates depth images as collections of planar facets that correspond to superpixels. It is simple to implement, generates a mostly regular grid of superpixels that approximate the depth image as a set of planar facets, and achieves good fidelity to object boundaries. Our results show improvements over a baseline method in fidelity to ground truth. They also show the practicality of using depth-image superpixels for foliage analysis. As far as we are aware, this is the first depth-only method for superpixel segmentation. This opens the door to applications where color is not available, or where color provides poor segmentation cues, as in some of the foliage data. Since the depth superpixels do not depend on color superpixels, they could be fused with a color-based approach; this is a future direction for our work.

ACKNOWLEDGMENT

This research was supported by an MSU start-up grant, the U.S. Department of Energy, Office of Science, Basic Energy Sciences [award number DE-FG02-91ER20021], the National Science Foundation [award number ], and the MSU Center for Advanced Algal and Plant Phenotyping.

Figure 5. Superpixel boundaries corresponding to Fig. 4 are plotted on depth images in (a), (c) and (e). For easier visual inspection we plot the same superpixels on the IR reflectance images in (b), (d) and (f).

Figure 4. Ground-truth labeled segments for three of the six plants: (a) Plant A, (b) Plant B, also shown in Fig. 1, and (c) Plant C.

REFERENCES

[1] B. Fulkerson, A. Vedaldi, and S. Soatto, "Class segmentation and object localization with superpixel neighborhoods," in Computer Vision, 2009 IEEE 12th International Conference on, Sept 2009.
[2] A. P. Moore, S. J. D. Prince, J. Warrell, U. Mohammed, and G. Jones, "Scene shape priors for superpixel segmentation," in Computer Vision, 2009 IEEE 12th International Conference on, Sept 2009.
[3] J. Lu, R. Xu, and J. J. Corso, "Human action segmentation with hierarchical supervoxel consistency," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.
[4] Y. Yang, S. Hallman, D. Ramanan, and C. Fowlkes, "Layered object models for image segmentation," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 34, no. 9, Sept.
[5] F. Yang, H. Lu, and M.-H. Yang, "Robust superpixel tracking," Image Processing, IEEE Transactions on, vol. 23, no. 4, April.
[6] C. L. Zitnick and S. B. Kang, "Stereo for image-based rendering using image over-segmentation," International Journal of Computer Vision, vol. 75, no. 1.
[7] M. Hansard, S. Lee, O. Choi, and R. Horaud, Time-of-Flight Cameras: Principles, Methods and Applications. New York, NY: Springer.
[8] L. Quan, P. Tan, G. Zeng, L. Yuan, J. Wang, and S. B. Kang, "Image-based plant modeling," in ACM SIGGRAPH 2006 Papers, ser. SIGGRAPH '06. New York, NY, USA: ACM, 2006.
[9] G. Alenya, B. Dellen, and C. Torras, "3D modelling of leaves from color and ToF data for robotized plant measuring," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, May 2011.
[10] G. Alenya, B. Dellen, S. Foix, and C. Torras, "Robotized plant probing: Leaf segmentation utilizing time-of-flight data," Robotics Automation Magazine, IEEE, vol. 20, no. 3, Sept 2013.

Figure 6. CUE metric and BR score for six plant canopies (Plant A through Plant F), comparing the Baseline Algorithm, SLIC, and FacetSuperpixel. Plants A, B and C are shown in Figs. 4 and 5.

[11] C. Xia, J.-M. Lee, Y. Li, Y.-H. Song, B.-K. Chung, and T.-S. Chon, "Plant leaf detection using modified active shape models," Biosystems Engineering, vol. 116, no. 1.
[12] N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. C. Lopez, and J. V. Soares, "Leafsnap: A computer vision system for automatic plant species identification," in Computer Vision ECCV 2012. Springer, 2012.
[13] D. Weikersdorfer, D. Gossow, and M. Beetz, "Depth-adaptive superpixels," in Pattern Recognition (ICPR), International Conference on, Nov 2012.
[14] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2.
[15] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 34, no. 11, Nov.
[16] M. Bergh, X. Boix, G. Roig, and L. Gool, "SEEDS: Superpixels extracted via energy-driven sampling," International Journal of Computer Vision, vol. 111, no. 3.
[17] D. Stutz, "Superpixel segmentation: An evaluation," in Pattern Recognition, ser. Lecture Notes in Computer Science, J. Gall, P. Gehler, and B. Leibe, Eds. Springer International Publishing, 2015, vol. 9358.
[18] H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer, and W. Stuetzle, "Piecewise smooth surface reconstruction," in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH '94. New York, NY, USA: ACM, 1994.
[19] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, Aug.
[20] A. P. Moore, S. J. D. Prince, J. Warrell, U. Mohammed, and G. Jones, "Superpixel lattices," in Computer Vision and Pattern Recognition, CVPR, IEEE Conference on, June 2008.
[21] O. Veksler, Y. Boykov, and P. Mehrani, "Superpixels and supervoxels in an energy optimization framework," in Computer Vision ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part V. Berlin, Heidelberg: Springer, 2010.
[22] M. Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa, "Entropy rate superpixel segmentation," in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, June 2011.
[23] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, May.
[24] A. Vedaldi and S. Soatto, "Quick shift and kernel methods for mode seeking," in Computer Vision ECCV 2008: 10th European Conference on Computer Vision, Marseille, France, October 12-18, 2008, Proceedings, Part IV. Berlin, Heidelberg: Springer, 2008.
[25] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, "TurboPixels: Fast superpixels using geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, Dec.
[26] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments," The International Journal of Robotics Research, vol. 31, no. 5.
[27] J. Park, H. Kim, Y. W. Tai, M. S. Brown, and I. S. Kweon, "High-quality depth map upsampling and completion for RGB-D cameras," IEEE Transactions on Image Processing, vol. 23, no. 12, Dec.
[28] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, and A. Fitzgibbon, "KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera," in Proc. of ACM UIST, 2011.
[29] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. W. Fitzgibbon, "KinectFusion: Real-time dense surface mapping and tracking," in Proc. of IEEE ISMAR, 2011.


More information

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview Human Body Recognition and Tracking: How the Kinect Works Kinect RGB-D Camera Microsoft Kinect (Nov. 2010) Color video camera + laser-projected IR dot pattern + IR camera $120 (April 2012) Kinect 1.5 due

More information

This is an author-deposited version published in : Eprints ID : 15223

This is an author-deposited version published in :   Eprints ID : 15223 Open Archive TOULOUSE Archive Ouverte (OATAO) OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible. This is an author-deposited

More information

Mobile Point Fusion. Real-time 3d surface reconstruction out of depth images on a mobile platform

Mobile Point Fusion. Real-time 3d surface reconstruction out of depth images on a mobile platform Mobile Point Fusion Real-time 3d surface reconstruction out of depth images on a mobile platform Aaron Wetzler Presenting: Daniel Ben-Hoda Supervisors: Prof. Ron Kimmel Gal Kamar Yaron Honen Supported

More information

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting R. Maier 1,2, K. Kim 1, D. Cremers 2, J. Kautz 1, M. Nießner 2,3 Fusion Ours 1

More information

SUPERPIXELS: THE END OF PIXELS IN OBIA. A COMPARISON OF STATE-OF-THE- ART SUPERPIXEL METHODS FOR REMOTE SENSING DATA

SUPERPIXELS: THE END OF PIXELS IN OBIA. A COMPARISON OF STATE-OF-THE- ART SUPERPIXEL METHODS FOR REMOTE SENSING DATA SUPERPIXELS: THE END OF PIXELS IN OBIA. A COMPARISON OF STATE-OF-THE- ART SUPERPIXEL METHODS FOR REMOTE SENSING DATA O. Csillik * Department of Geoinformatics Z_GIS, University of Salzburg, 5020, Salzburg,

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Superpixel Segmentation: An Evaluation

Superpixel Segmentation: An Evaluation Superpixel Segmentation: An Evaluation David Stutz Computer Vision Group, RWTH Aachen University david.stutz@rwth-aachen.de Abstract. In recent years, superpixel algorithms have become a standard tool

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

CRF Based Point Cloud Segmentation Jonathan Nation

CRF Based Point Cloud Segmentation Jonathan Nation CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to

More information

Using the Kolmogorov-Smirnov Test for Image Segmentation

Using the Kolmogorov-Smirnov Test for Image Segmentation Using the Kolmogorov-Smirnov Test for Image Segmentation Yong Jae Lee CS395T Computational Statistics Final Project Report May 6th, 2009 I. INTRODUCTION Image segmentation is a fundamental task in computer

More information

Lecture 16 Segmentation and Scene understanding

Lecture 16 Segmentation and Scene understanding Lecture 16 Segmentation and Scene understanding Introduction! Mean-shift! Graph-based segmentation! Top-down segmentation! Silvio Savarese Lecture 15 -! 3-Mar-14 Segmentation Silvio Savarese Lecture 15

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Image Segmentation. Shengnan Wang

Image Segmentation. Shengnan Wang Image Segmentation Shengnan Wang shengnan@cs.wisc.edu Contents I. Introduction to Segmentation II. Mean Shift Theory 1. What is Mean Shift? 2. Density Estimation Methods 3. Deriving the Mean Shift 4. Mean

More information

Geometric Reconstruction Dense reconstruction of scene geometry

Geometric Reconstruction Dense reconstruction of scene geometry Lecture 5. Dense Reconstruction and Tracking with Real-Time Applications Part 2: Geometric Reconstruction Dr Richard Newcombe and Dr Steven Lovegrove Slide content developed from: [Newcombe, Dense Visual

More information

Segmentation in electron microscopy images

Segmentation in electron microscopy images Segmentation in electron microscopy images Aurelien Lucchi, Kevin Smith, Yunpeng Li Bohumil Maco, Graham Knott, Pascal Fua. http://cvlab.epfl.ch/research/medical/neurons/ Outline Automated Approach to

More information

Compact Watershed and Preemptive SLIC: On improving trade-offs of superpixel segmentation algorithms

Compact Watershed and Preemptive SLIC: On improving trade-offs of superpixel segmentation algorithms To appear in Proc. of International Conference on Pattern Recognition (ICPR), 2014. DOI: not yet available c 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained

More information

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images MECATRONICS - REM 2016 June 15-17, 2016 High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images Shinta Nozaki and Masashi Kimura School of Science and Engineering

More information

Colored Point Cloud Registration Revisited Supplementary Material

Colored Point Cloud Registration Revisited Supplementary Material Colored Point Cloud Registration Revisited Supplementary Material Jaesik Park Qian-Yi Zhou Vladlen Koltun Intel Labs A. RGB-D Image Alignment Section introduced a joint photometric and geometric objective

More information

Kinect Shadow Detection and Classification

Kinect Shadow Detection and Classification 2013 IEEE International Conference on Computer Vision Workshops Kinect Shadow Detection and Classification Teng Deng 1, Hui Li 1, Jianfei Cai 1, Tat-Jen Cham 1, Henry Fuchs 2 1 Nanyang Technological University,

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

Edge-Based Split-and-Merge Superpixel Segmentation

Edge-Based Split-and-Merge Superpixel Segmentation Proceeding of the 2015 IEEE International Conference on Information and Automation Lijing, China, August 2015 Edge-Based Split-and-Merge Superpixel Segmentation Li Li, Jian Yao, Jinge Tu, Xiaohu Lu, Kai

More information

Semantic Context Forests for Learning- Based Knee Cartilage Segmentation in 3D MR Images

Semantic Context Forests for Learning- Based Knee Cartilage Segmentation in 3D MR Images Semantic Context Forests for Learning- Based Knee Cartilage Segmentation in 3D MR Images MICCAI 2013: Workshop on Medical Computer Vision Authors: Quan Wang, Dijia Wu, Le Lu, Meizhu Liu, Kim L. Boyer,

More information

Towards Spatio-Temporally Consistent Semantic Mapping

Towards Spatio-Temporally Consistent Semantic Mapping Towards Spatio-Temporally Consistent Semantic Mapping Zhe Zhao, Xiaoping Chen University of Science and Technology of China, zhaozhe@mail.ustc.edu.cn,xpchen@ustc.edu.cn Abstract. Intelligent robots require

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

CS 534: Computer Vision Segmentation and Perceptual Grouping

CS 534: Computer Vision Segmentation and Perceptual Grouping CS 534: Computer Vision Segmentation and Perceptual Grouping Ahmed Elgammal Dept of Computer Science CS 534 Segmentation - 1 Outlines Mid-level vision What is segmentation Perceptual Grouping Segmentation

More information

Object Tracking using Superpixel Confidence Map in Centroid Shifting Method

Object Tracking using Superpixel Confidence Map in Centroid Shifting Method Indian Journal of Science and Technology, Vol 9(35), DOI: 10.17485/ijst/2016/v9i35/101783, September 2016 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 Object Tracking using Superpixel Confidence

More information

Accelerated gslic for Superpixel Generation used in Object Segmentation

Accelerated gslic for Superpixel Generation used in Object Segmentation Accelerated gslic for Superpixel Generation used in Object Segmentation Robert Birkus Supervised by: Ing. Wanda Benesova, PhD. Institute of Applied Informatics Faculty of Informatics and Information Technologies

More information

Segmentation & Clustering

Segmentation & Clustering EECS 442 Computer vision Segmentation & Clustering Segmentation in human vision K-mean clustering Mean-shift Graph-cut Reading: Chapters 14 [FP] Some slides of this lectures are courtesy of prof F. Li,

More information

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic

More information

3D Point Cloud Segmentation Using a Fully Connected Conditional Random Field

3D Point Cloud Segmentation Using a Fully Connected Conditional Random Field 07 5th European Signal Processing Conference (EUSIPCO) 3D Point Cloud Segmentation Using a Fully Connected Conditional Random Field Xiao Lin Josep R.Casas Montse Pardás Abstract Traditional image segmentation

More information

3D Point Cloud Segmentation Using a Fully Connected Conditional Random Field

3D Point Cloud Segmentation Using a Fully Connected Conditional Random Field 3D Point Cloud Segmentation Using a Fully Connected Conditional Random Field Xiao Lin Image Processing Group Technical University of Catalonia (UPC) Barcelona, Spain Josep R.Casas Image Processing Group

More information

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller 3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Real-Time Plane Segmentation and Obstacle Detection of 3D Point Clouds for Indoor Scenes

Real-Time Plane Segmentation and Obstacle Detection of 3D Point Clouds for Indoor Scenes Real-Time Plane Segmentation and Obstacle Detection of 3D Point Clouds for Indoor Scenes Zhe Wang, Hong Liu, Yueliang Qian, and Tao Xu Key Laboratory of Intelligent Information Processing && Beijing Key

More information

Tri-modal Human Body Segmentation

Tri-modal Human Body Segmentation Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4

More information

ECSE 626 Course Project : A Study in the Efficient Graph-Based Image Segmentation

ECSE 626 Course Project : A Study in the Efficient Graph-Based Image Segmentation ECSE 626 Course Project : A Study in the Efficient Graph-Based Image Segmentation Chu Wang Center for Intelligent Machines chu.wang@mail.mcgill.ca Abstract In this course project, I will investigate into

More information

Scene Segmentation by Color and Depth Information and its Applications

Scene Segmentation by Color and Depth Information and its Applications Scene Segmentation by Color and Depth Information and its Applications Carlo Dal Mutto Pietro Zanuttigh Guido M. Cortelazzo Department of Information Engineering University of Padova Via Gradenigo 6/B,

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Articulated Pose Estimation with Flexible Mixtures-of-Parts

Articulated Pose Estimation with Flexible Mixtures-of-Parts Articulated Pose Estimation with Flexible Mixtures-of-Parts PRESENTATION: JESSE DAVIS CS 3710 VISUAL RECOGNITION Outline Modeling Special Cases Inferences Learning Experiments Problem and Relevance Problem:

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

CS395T paper review. Indoor Segmentation and Support Inference from RGBD Images. Chao Jia Sep

CS395T paper review. Indoor Segmentation and Support Inference from RGBD Images. Chao Jia Sep CS395T paper review Indoor Segmentation and Support Inference from RGBD Images Chao Jia Sep 28 2012 Introduction What do we want -- Indoor scene parsing Segmentation and labeling Support relationships

More information

Object Reconstruction

Object Reconstruction B. Scholz Object Reconstruction 1 / 39 MIN-Fakultät Fachbereich Informatik Object Reconstruction Benjamin Scholz Universität Hamburg Fakultät für Mathematik, Informatik und Naturwissenschaften Fachbereich

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Superpixels Generating from the Pixel-based K-Means Clustering

Superpixels Generating from the Pixel-based K-Means Clustering Superpixels Generating from the Pixel-based K-Means Clustering Shang-Chia Wei, Tso-Jung Yen Institute of Statistical Science Academia Sinica Taipei, Taiwan 11529, R.O.C. wsc@stat.sinica.edu.tw, tjyen@stat.sinica.edu.tw

More information

Light Field Occlusion Removal

Light Field Occlusion Removal Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image

More information

Handheld scanning with ToF sensors and cameras

Handheld scanning with ToF sensors and cameras Handheld scanning with ToF sensors and cameras Enrico Cappelletto, Pietro Zanuttigh, Guido Maria Cortelazzo Dept. of Information Engineering, University of Padova enrico.cappelletto,zanuttigh,corte@dei.unipd.it

More information

Grouping and Segmentation

Grouping and Segmentation 03/17/15 Grouping and Segmentation Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Today s class Segmentation and grouping Gestalt cues By clustering (mean-shift) By boundaries (watershed)

More information

Superpixels and Supervoxels in an Energy Optimization Framework

Superpixels and Supervoxels in an Energy Optimization Framework Superpixels and Supervoxels in an Energy Optimization Framework Olga Veksler, Yuri Boykov, and Paria Mehrani Computer Science Department, University of Western Ontario London, Canada {olga,yuri,pmehrani}@uwo.ca

More information

DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION

DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION 2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent

More information

A Keypoint Descriptor Inspired by Retinal Computation

A Keypoint Descriptor Inspired by Retinal Computation A Keypoint Descriptor Inspired by Retinal Computation Bongsoo Suh, Sungjoon Choi, Han Lee Stanford University {bssuh,sungjoonchoi,hanlee}@stanford.edu Abstract. The main goal of our project is to implement

More information

Optimizing Monocular Cues for Depth Estimation from Indoor Images

Optimizing Monocular Cues for Depth Estimation from Indoor Images Optimizing Monocular Cues for Depth Estimation from Indoor Images Aditya Venkatraman 1, Sheetal Mahadik 2 1, 2 Department of Electronics and Telecommunication, ST Francis Institute of Technology, Mumbai,

More information

Room Reconstruction from a Single Spherical Image by Higher-order Energy Minimization

Room Reconstruction from a Single Spherical Image by Higher-order Energy Minimization Room Reconstruction from a Single Spherical Image by Higher-order Energy Minimization Kosuke Fukano, Yoshihiko Mochizuki, Satoshi Iizuka, Edgar Simo-Serra, Akihiro Sugimoto, and Hiroshi Ishikawa Waseda

More information

CS 664 Image Matching and Robust Fitting. Daniel Huttenlocher

CS 664 Image Matching and Robust Fitting. Daniel Huttenlocher CS 664 Image Matching and Robust Fitting Daniel Huttenlocher Matching and Fitting Recognition and matching are closely related to fitting problems Parametric fitting can serve as more restricted domain

More information

Multi-view stereo. Many slides adapted from S. Seitz

Multi-view stereo. Many slides adapted from S. Seitz Multi-view stereo Many slides adapted from S. Seitz Beyond two-view stereo The third eye can be used for verification Multiple-baseline stereo Pick a reference image, and slide the corresponding window

More information

Title: Adaptive Region Merging Segmentation of Airborne Imagery for Roof Condition Assessment. Abstract:

Title: Adaptive Region Merging Segmentation of Airborne Imagery for Roof Condition Assessment. Abstract: Title: Adaptive Region Merging Segmentation of Airborne Imagery for Roof Condition Assessment Abstract: In order to perform residential roof condition assessment with very-high-resolution airborne imagery,

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem

More information

Superpixels and Polygons using Simple Non-Iterative Clustering

Superpixels and Polygons using Simple Non-Iterative Clustering Superpixels and Polygons using Simple Non-Iterative Clustering Radhakrishna Achanta and Sabine Süsstrunk School of Computer and Communication Sciences (IC) École Polytechnique Fédérale de Lausanne (EPFL)

More information

Specular Reflection Separation using Dark Channel Prior

Specular Reflection Separation using Dark Channel Prior 2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

More information

Discrete-Continuous Gradient Orientation Estimation for Faster Image Segmentation

Discrete-Continuous Gradient Orientation Estimation for Faster Image Segmentation Discrete-Continuous Gradient Orientation Estimation for Faster Image Segmentation Michael Donoser and Dieter Schmalstieg Institute for Computer Graphics and Vision Graz University of Technology {donoser,schmalstieg}@icg.tugraz.at

More information

Clustering Part 4 DBSCAN

Clustering Part 4 DBSCAN Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013 Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth

More information

Project Updates Short lecture Volumetric Modeling +2 papers

Project Updates Short lecture Volumetric Modeling +2 papers Volumetric Modeling Schedule (tentative) Feb 20 Feb 27 Mar 5 Introduction Lecture: Geometry, Camera Model, Calibration Lecture: Features, Tracking/Matching Mar 12 Mar 19 Mar 26 Apr 2 Apr 9 Apr 16 Apr 23

More information

Recognizing Apples by Piecing Together the Segmentation Puzzle

Recognizing Apples by Piecing Together the Segmentation Puzzle Recognizing Apples by Piecing Together the Segmentation Puzzle Kyle Wilshusen 1 and Stephen Nuske 2 Abstract This paper presents a system that can provide yield estimates in apple orchards. This is done

More information

Keywords Image Segmentation, Pixels, Superpixels, Superpixel Segmentation Methods

Keywords Image Segmentation, Pixels, Superpixels, Superpixel Segmentation Methods Volume 4, Issue 9, September 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Superpixel

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

Removing Moving Objects from Point Cloud Scenes

Removing Moving Objects from Point Cloud Scenes Removing Moving Objects from Point Cloud Scenes Krystof Litomisky and Bir Bhanu University of California, Riverside krystof@litomisky.com, bhanu@ee.ucr.edu Abstract. Three-dimensional simultaneous localization

More information

A FAST SEGMENTATION-DRIVEN ALGORITHM FOR ACCURATE STEREO CORRESPONDENCE. Stefano Mattoccia and Leonardo De-Maeztu

A FAST SEGMENTATION-DRIVEN ALGORITHM FOR ACCURATE STEREO CORRESPONDENCE. Stefano Mattoccia and Leonardo De-Maeztu A FAST SEGMENTATION-DRIVEN ALGORITHM FOR ACCURATE STEREO CORRESPONDENCE Stefano Mattoccia and Leonardo De-Maeztu University of Bologna, Public University of Navarre ABSTRACT Recent cost aggregation strategies

More information

University of Florida CISE department Gator Engineering. Clustering Part 4

University of Florida CISE department Gator Engineering. Clustering Part 4 Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION

BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION Ruijin Ma Department Of Civil Engineering Technology SUNY-Alfred Alfred, NY 14802 mar@alfredstate.edu ABSTRACT Building model reconstruction has been

More information

arxiv: v1 [cs.cv] 16 Sep 2013

arxiv: v1 [cs.cv] 16 Sep 2013 Noname manuscript No. (will be inserted by the editor) SEEDS: Superpixels Extracted via Energy-Driven Sampling Michael Van den Bergh Xavier Boix Gemma Roig Luc Van Gool arxiv:1309.3848v1 [cs.cv] 16 Sep

More information

3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava

3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava 3D Computer Vision Dense 3D Reconstruction II Prof. Didier Stricker Christiano Gava Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Lecture 10 Dense 3D Reconstruction

Lecture 10 Dense 3D Reconstruction Institute of Informatics Institute of Neuroinformatics Lecture 10 Dense 3D Reconstruction Davide Scaramuzza 1 REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time M. Pizzoli, C. Forster,

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Urban Scene Segmentation, Recognition and Remodeling. Part III. Jinglu Wang 11/24/2016 ACCV 2016 TUTORIAL

Urban Scene Segmentation, Recognition and Remodeling. Part III. Jinglu Wang 11/24/2016 ACCV 2016 TUTORIAL Part III Jinglu Wang Urban Scene Segmentation, Recognition and Remodeling 102 Outline Introduction Related work Approaches Conclusion and future work o o - - ) 11/7/16 103 Introduction Motivation Motivation

More information

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,

More information

Incremental compact 3D maps of planar patches from RGBD points

Incremental compact 3D maps of planar patches from RGBD points Incremental compact 3D maps of planar patches from RGBD points Juan Navarro and José M. Cañas Universidad Rey Juan Carlos, Spain Abstract. The RGBD sensors have opened the door to low cost perception capabilities

More information

Joint Color and Depth Segmentation Based on Region Merging and Surface Fitting

Joint Color and Depth Segmentation Based on Region Merging and Surface Fitting Joint Color and Depth Segmentation Based on Region Merging and Surface Fitting Giampaolo Pagnutti and Pietro Zanuttigh Department of Information Engineering, University of Padova, Via Gradenigo 6B, Padova,

More information

Operators-Based on Second Derivative double derivative Laplacian operator Laplacian Operator Laplacian Of Gaussian (LOG) Operator LOG

Operators-Based on Second Derivative double derivative Laplacian operator Laplacian Operator Laplacian Of Gaussian (LOG) Operator LOG Operators-Based on Second Derivative The principle of edge detection based on double derivative is to detect only those points as edge points which possess local maxima in the gradient values. Laplacian

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information