ICP and 3D-Reconstruction

1 Nikolas Slottke, Hendrik Linne, {7slottke, 7linne}@informatik.uni-hamburg.de, Fakultät für Mathematik, Informatik und Naturwissenschaften, Technische Aspekte Multimodaler Systeme, 17 December 2012

2 Outline: Introduction (Motivation, Challenge); ICP (Iterative Closest Point Algorithm, Improving, Conclusion); 3D-Modelling (Structured Light, Kinect Camera)


4 Introduction - Motivation. To begin with: why should we use three-dimensional reconstruction? The answer is rather simple: whenever we need a virtual model of something in order to handle it virtually.

5 Introduction - Motivation. One task could be to first create a clay model and later derive a virtual CAD model from the real clay model.

6 Introduction - Motivation. Another task could be the medical use of three-dimensional models.

7 Introduction - Motivation. It is also interesting for robotics: one can create models of the environment the robot is acting in. A current project is the RACE project (Robustness by Autonomous Competence Enhancement) at the University of Hamburg, which uses the PR2 robot. It is knowledge driven and works with high-level and low-level experiences; high-level: goals, tasks and behaviours; low-level: sensory data.

8 Introduction - Challenge. How do we create a model? Digitize the environment (or maybe only one object of interest): take a picture with depth information and create a point cloud from it. Later on, one can create a surface for the model.

9 Introduction - Challenge. And then? The environment moves... or the robot moves... or the robot moves the environment. Again, we digitize the environment, but now we have two point clouds, each from a different perspective. We want to match them to obtain a complete model. This is difficult because of the different perspectives: some points belong together, some do not. Manual matching is out of the question; we need automatic matching.

10 Outline: Introduction (Motivation, Challenge); ICP (Iterative Closest Point Algorithm, Improving, Conclusion); 3D-Modelling (Structured Light, Kinect Camera)

11 ICP - Iterative Closest Point Algorithm. This algorithm was first published by Paul J. Besl and Neil D. McKay in the paper "A Method for Registration of 3-D Shapes" in 1992. It builds on the prior work "Closed-form solution of absolute orientation using unit quaternions" by Berthold K. P. Horn from the University of Hawaii, published in 1987.

12 ICP - Iterative Closest Point Algorithm. Prior assumptions: the prior point cloud is called the model, the new point cloud the data set. The model X has N_x points x_i, i.e. X = { x_i } with 0 ≤ i < N_x. The data set P, the new point cloud to be matched, has N_p points p_i, i.e. P = { p_i } with 0 ≤ i < N_p.

13 ICP - Iterative Closest Point Algorithm. Initialization: before the algorithm can start, several settings have to be made. Start the iteration at k = 0. Set P_k = P, i.e. P_0 = P. Set q_0 = [1, 0, 0, 0, 0, 0, 0], the rigid transformation vector (the identity transformation). Specify the convergence threshold τ.

14 ICP - Iterative Closest Point Algorithm. The ICP algorithm:
1. Compute the closest point pairing: Y_k = C(P_k, X).
2. Compute the rigid registration: (q_k, d_k) = Q(P_0, Y_k).
3. Apply the registration: P_{k+1} = q_k(P_0).
4. Terminate the iteration when the change in mean-square error falls below τ: d_k − d_{k+1} < τ.
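
To make the four steps concrete, here is a minimal Python/NumPy sketch of the iteration, assuming point clouds are stored as N×3 arrays; the helpers closest_points and rigid_registration are hypothetical names whose sketches follow the next two slides.

```python
import numpy as np

def icp(P, X, tau=1e-6, max_iter=50):
    """Basic ICP sketch: aligns the data cloud P (Np x 3) to the model cloud X (Nx x 3).

    Returns a rotation R and translation t such that P @ R.T + t approximates
    the matching part of X.
    """
    P0 = P.copy()
    R, t = np.eye(3), np.zeros(3)              # q_0: identity registration
    Pk = P0
    d_prev = np.inf
    for k in range(max_iter):
        Yk = closest_points(Pk, X)             # step 1: Y_k = C(P_k, X)
        R, t, dk = rigid_registration(P0, Yk)  # step 2: (q_k, d_k) = Q(P_0, Y_k)
        Pk = P0 @ R.T + t                      # step 3: P_{k+1} = q_k(P_0)
        if d_prev - dk < tau:                  # step 4: stop when the error change is below tau
            break
        d_prev = dk
    return R, t
```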

15 ICP - Iterative Closest Point Algorithm. Compute the closest points: the distance between an individual point p and the model X is d(p, X) = min_{x ∈ X} ||x − p||. The closest point in X is the point y ∈ X such that d(p, y) = d(p, X). This is performed for each point in P. Y denotes the resulting set of closest points and C the closest point operator: Y = C(P, X).
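
A brute-force sketch of the closest point operator C; the function name and the all-pairs NumPy broadcast are illustrative, not an optimized implementation.

```python
import numpy as np

def closest_points(P, X):
    """Brute-force closest-point operator Y = C(P, X).

    For every p in P, pick the x in X minimising ||x - p||.
    This needs O(Np * Nx) distance evaluations, as discussed on the next slides.
    """
    # (Np, Nx) matrix of squared distances; the square root is not needed for
    # the argmin, which is one of the simple speed-ups mentioned later.
    d2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return X[np.argmin(d2, axis=1)]
```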

16 ICP - Iterative Closest Point Algorithm. Compute the rigid registration: the Q-operator is applied to obtain q_k and d_k. d_k is the mean-square point matching error, computed as d_k = (1/N_p) Σ_{i=1}^{N_p} ||y_{i,k} − p_{i,k}||². q_k is the transformation vector in the registration state space, q_k = [scale, roll, pitch, yaw, x, y, z]. As the iterative closest point algorithm proceeds, a sequence of registration vectors is generated: q_1, q_2, q_3, q_4, q_5, q_6, ...
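
A sketch of the Q-operator using Horn's unit-quaternion solution (rotation and translation only, no scale), which is the prior work the slides cite; the function name and the return convention (R, t, d_k) are assumptions of this sketch.

```python
import numpy as np

def rigid_registration(P, Y):
    """Find R, t minimising the mean squared error between Y and P @ R.T + t,
    via Horn's unit-quaternion method; also return that error d_k."""
    mu_p, mu_y = P.mean(axis=0), Y.mean(axis=0)
    # cross-covariance of the centred point sets
    S = (P - mu_p).T @ (Y - mu_y) / len(P)
    delta = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    # optimal rotation = eigenvector of the largest eigenvalue, as a unit quaternion
    w, v = np.linalg.eigh(N)
    q0, q1, q2, q3 = v[:, np.argmax(w)]
    R = np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 + q2*q2 - q1*q1 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 + q3*q3 - q1*q1 - q2*q2],
    ])
    t = mu_y - R @ mu_p
    d = np.mean(np.sum((Y - (P @ R.T + t)) ** 2, axis=1))   # mean-square matching error d_k
    return R, t, d
```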

17 ICP - Improving - Simple Possibilities. What does it cost? Looking at the algorithm, one can see that finding the pairing is the most computationally expensive step, with a worst-case cost of O(N_p N_x). Therefore it is not useful for large data sets. There are two immediate possibilities to improve it (see the sketch below): do not take the square root in the Euclidean distance calculations, and compute a distance only as long as the partial sum is still smaller than the current minimal distance. But it is still O(N_p N_x)!
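
A minimal illustration of the two simple speed-ups in plain Python; the function name and the coordinate-wise loop are made up for this sketch.

```python
def closest_point_early_exit(p, X):
    """Closest point to p in X using squared distances (no square root) and
    abandoning a candidate as soon as the partial sum of squared coordinate
    differences already reaches the best value found so far.
    Still O(Nx) per query point, i.e. O(Np * Nx) overall."""
    best, best_d2 = None, float("inf")
    for x in X:
        d2 = 0.0
        for a, b in zip(p, x):
            d2 += (a - b) ** 2
            if d2 >= best_d2:        # partial sum already too large: skip this candidate
                break
        else:
            best, best_d2 = x, d2
    return best
```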

18 ICP - Improving - k-d Trees. Using k-d trees: one can use k-dimensional trees as the data structure to perform the nearest-neighbour searches in less than linear time. A k-dimensional binary tree over dim-dimensional points imposes a spatial decomposition that prunes much of the search space. It is similar to a regular binary tree, except that the key used changes between levels of the tree: at level i, the (i mod dim)-th coordinate is used.

19 ICP - Improving - k-d Trees. Naive k-d tree: a naive k-d tree with dim = 2, a = (1, 2), b = (3, 4), c = (5, 6) and d = (7, 8). The resulting tree can differ depending on the insertion order: left tree, insertion order abcd; right tree, insertion order bcad.

20 ICP - Improving - k-d Trees. Median k-d tree creation (for dim = 2), sketched in code below:
1. If the cardinality of {P_i} is 1, create a leaf node.
2. Otherwise, if level i is even, split {P_i} into two subsets with a vertical line through the median x-coordinate of the points in {P_i}: let P1 be the set of points to the left of the line and P2 the set of points to the right. If level i is odd, split {P_i} with a horizontal line through the median y-coordinate: let P1 be the set of points below and P2 the set of points above. In both cases, points exactly on the line belong to P1.
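
A sketch of these construction rules together with a pruning nearest-neighbour query; the dictionary-based node layout and the handling of the degenerate all-equal case are choices of this sketch, not part of the slides.

```python
import numpy as np

def build_kd(points, level=0, dim=2):
    """Median k-d tree following the slide's rules: split on x at even levels,
    on y at odd levels; points on the splitting line go to the left/below set."""
    points = list(points)
    if len(points) == 1:
        return {"leaf": points[0]}
    axis = level % dim
    split = float(np.median([p[axis] for p in points]))
    P1 = [p for p in points if p[axis] <= split]     # left of / below (or on) the line
    P2 = [p for p in points if p[axis] > split]
    if not P2:                                       # all points coincide on this axis: split arbitrarily
        P1, P2 = points[:-1], points[-1:]
    return {"axis": axis, "split": split,
            "left": build_kd(P1, level + 1, dim),
            "right": build_kd(P2, level + 1, dim)}

def nearest(node, q, best=None):
    """Nearest-neighbour query that prunes a subtree whenever the splitting
    line is farther away than the best squared distance found so far."""
    if "leaf" in node:
        p = node["leaf"]
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        return (p, d2) if best is None or d2 < best[1] else best
    go_left = q[node["axis"]] <= node["split"]
    near, far = (node["left"], node["right"]) if go_left else (node["right"], node["left"])
    best = nearest(near, q, best)
    if (q[node["axis"]] - node["split"]) ** 2 < best[1]:   # the other side may still hold a closer point
        best = nearest(far, q, best)
    return best
```

Applied to the example points a = (1, 2), b = (3, 4), c = (5, 6), d = (7, 8), build_kd first splits at the median x-coordinate 4 (a and b to the left, c and d to the right) and then each pair at its median y-coordinate, giving a balanced tree consistent with the median rules.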

21 ICP - Improving - k-d Trees. Median k-d tree: a median k-d tree with dim = 2, a = (1, 2), b = (3, 4), c = (5, 6) and d = (7, 8).

22 ICP - Improving - Accelerated ICP (1). As the iterative closest point algorithm proceeds, a sequence of registration vectors is generated: q_1, q_2, q_3, q_4, q_5, q_6, ... This traces out a path in the registration state space toward a locally optimal shape match. Let δθ be a sufficiently small angular tolerance. If the last three registration state vectors q_k, q_{k−1}, q_{k−2} point in a consistent direction (their change of direction stays below δθ), compute a linear approximation and a parabolic interpolant of the error along this direction. This gives a possible linear update, based on the zero crossing of the line, and a possible parabola update, based on the extremum point of the parabola.
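
A rough sketch of the extrapolation step, assuming the last three registration vectors already point in a consistent direction (the δθ test has passed); the function name, the v_max_factor default and the exact selection rules are simplifications, not a faithful reproduction of Besl and McKay's update.

```python
import numpy as np

def accelerate(q, d, v_max_factor=25.0):
    """Extrapolate the registration. q holds the last three 7-D registration
    vectors [q_{k-2}, q_{k-1}, q_k], d the corresponding mean-square errors.
    Returns an extrapolated registration vector replacing q_k."""
    dq1 = np.asarray(q[2], dtype=float) - np.asarray(q[1], dtype=float)   # latest change
    dq0 = np.asarray(q[1], dtype=float) - np.asarray(q[0], dtype=float)
    step = np.linalg.norm(dq1)
    if step == 0:
        return np.asarray(q[2], dtype=float)       # no motion: nothing to extrapolate
    # signed arc lengths along the path, with the current state at v = 0
    v2, v1, v0 = 0.0, -step, -step - np.linalg.norm(dq0)
    # linear fit through the last two samples -> possible zero crossing of the error
    a1 = (d[2] - d[1]) / (v2 - v1)
    v_line = -d[2] / a1 if a1 != 0 else np.inf
    # parabolic fit through all three samples -> possible extremum of the error
    coeffs = np.polyfit([v0, v1, v2], [d[0], d[1], d[2]], 2)
    v_par = -coeffs[1] / (2 * coeffs[0]) if coeffs[0] != 0 else np.inf
    v_max = v_max_factor * step
    if 0 < v_par < v_line < v_max:
        v = v_par                                   # parabola update
    elif 0 < v_line < v_par < v_max or v_par < 0 < v_line < v_max:
        v = v_line                                  # line update
    else:
        v = v_max                                   # cap the extrapolation
    return np.asarray(q[2], dtype=float) + v * dq1 / step
```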

23 ICP - Improving - Accelerated ICP (2). A consistent direction of the registration vectors allows acceleration of the ICP algorithm.

24 ICP - Improving - Accelerated ICP. What does it cost now? Is there an improvement from using the k-d tree or from accelerating ICP? Yes; remember the cost of O(N_p N_x). The cost of constructing a k-d tree is O(n log n): median finding can be done in O(n), and given this cost per tree level and the log n levels, the total construction time is O(n log n). The improvement from the accelerated ICP depends on the point clouds and their poses; a nominal run of more than 50 basic ICP iterations is typically accelerated to 15 or 20 iterations.
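
The O(n log n) construction bound can also be read off the usual divide-and-conquer recurrence, with linear-time median selection at every level:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n)
\quad\Longrightarrow\quad
T(n) = O(n \log n).
```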

25 ICP - Conclusion. Disadvantages: the first disadvantage is the time complexity of O(N_p N_x) of the basic ICP algorithm; one has to improve the ICP algorithm when using it for real-time applications. Another disadvantage appears if the two point clouds to be matched are far apart with very different poses: by minimizing the mean-square error, the ICP algorithm terminates in a local minimum rather than the global minimum, and the matching will be incorrect. One therefore needs point clouds that do not differ too much.

26 ICP - Conclusion. Alternatives: there are several alternatives that can be used; these alternatives are image processing algorithms. Surface-based methods: template matching, Fourier methods. Feature-based methods: using spatial relations, invariant descriptors, perspective reconstruction.

27 ICP - Conclusion. Alternatives - Using SIFT features: SIFT features are robust against scaling, rotation and translation; partial coverage is also tolerated.

28 ICP - Conclusion. Alternatives - insight3d: from the given images, the geometry is reconstructed and the camera positions are computed.

29 ICP - Conclusion. Alternatives - insight3d: with the information about the camera positions, one can create a three-dimensional model from only three images.

30 Outline: Introduction (Motivation, Challenge); ICP (Iterative Closest Point Algorithm, Improving, Conclusion); 3D-Modelling (Structured Light, Kinect Camera)

31 3D-Modelling - Structured Light. Structured light is the process of projecting a known pattern of pixels onto a scene and extracting depth information. Often grids or horizontal bars are used; the deformation of these patterns allows vision systems to extract depth information. Sometimes invisible or imperceptible light is used, e.g. infrared light, to prevent interfering with other computer vision systems. Alternating between two exactly opposite patterns at high frame rates is also possible.
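
A toy triangulation sketch for a rectified camera/projector pair, just to show where the depth comes from once the pattern has been decoded; the function, its parameters and the simple disparity model are assumptions of this sketch, not how the Kinect's decoder actually works.

```python
import numpy as np

def depth_from_stripes(x_proj_map, f, baseline):
    """x_proj_map[y, x] holds the projector column whose stripe was decoded at
    camera pixel (x, y); depth then follows from the disparity, exactly as in
    stereo triangulation.  f is the focal length in pixels, baseline the
    camera-projector distance."""
    h, w = x_proj_map.shape
    x_cam = np.arange(w)[None, :].repeat(h, axis=0).astype(float)
    disparity = x_cam - x_proj_map
    with np.errstate(divide="ignore"):
        depth = f * baseline / disparity
    depth[disparity <= 0] = 0.0            # undecoded or geometrically invalid pixels
    return depth
```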

32 3D-Modelling - Kinect Camera. Kinect is a motion-sensing input device, originally developed for Microsoft's Xbox 360 under the codename Project Natal and launched in November 2010. It has an RGB camera with VGA resolution (640x480), an IR depth-finding camera with VGA resolution, and an IR laser projector for depth finding (structured light). The depth sensing range is limited to a few metres.

33 3D-Modelling - Abstract. KinectFusion enables a user holding and moving a Kinect camera to create a reconstruction of an indoor scene and to interact with the scene: tracking the 3D pose of the Kinect camera, reconstructing a precise 3D model of the scene in real time, object segmentation and user interaction in front of the sensor, and enabling real-time multi-touch interaction anywhere.

34 3D-Modelling - Abstract. Figure: A) user in a coffee table scene; B) Phong-shaded reconstructed 3D model; C) 3D model texture-mapped with Kinect RGB data and real-time particles simulated on the model; D) multi-touch interactions performed on any reconstructed surface; E) real-time segmentation and 3D tracking of an object.

35 3D-Modelling - Introduction. Kinect has made depth cameras very popular and accessible to all, especially for researchers. The Kinect camera generates real-time depth maps with discrete range measurements; this data can be reprojected as a set of 3D points (a point cloud). Kinect depth data is compelling compared to other commercially available depth cameras, but also very noisy.

36 3D-Modelling - Introduction. Generating a 3D model from one viewpoint requires strong assumptions about neighbouring points, which leads to a noisy and incomplete low-quality mesh. To create a complete or even watertight 3D model, different viewpoints must be captured and fused into a single representation.

37 3D-Modelling - Introduction. KinectFusion takes live depth data from a moving Kinect camera and creates a single high-quality 3D model. A user can move through any indoor environment with the camera and reconstruct the physical scene within seconds. KinectFusion continuously tracks the 6 degrees-of-freedom (6DOF) pose of the camera, and new viewpoints are fused into the global representation of the scene. A GPU processing pipeline allows for accurate camera tracking and surface reconstruction in real time.

38 3D-Modelling - Introduction. Figure: A) RGB image of the scene; B) normals extracted from the raw Kinect depth map; C) 3D mesh created from a single viewpoint/depth map; D) and E) 3D model generated (D) and rendered with Phong shading (E).

39 3D-Modelling - Introduction. KinectFusion makes the Kinect camera a low-cost handheld scanner. The reconstructed 3D model can also be leveraged for geometry-aware augmented reality and physics-based interaction. This user interaction in front of the camera is a fundamental challenge: the scene itself can no longer be assumed to be static.

40 3D-Modelling - Related Work. Reconstructing geometry from active sensors, passive cameras, etc. is a well-studied area of research in computer vision and also in robotics. SLAM (Simultaneous Localization and Mapping) is one example from robotics, and it is basically what KinectFusion does when tracking the 6DOF pose of the Kinect camera and reconstructing a map/model.

41 3D-Modelling - Related Work. Challenges and features of KinectFusion: interactive rates, no explicit feature detection, high-quality reconstruction of geometry, dynamic interaction assumed, infrastructure-less, room scale.

42 3D-Modelling - KinectFusion. Moving the Kinect camera creates different viewpoints and holes in the model are filled. Even small motions caused by camera shake result in new viewpoints; this creates an effect similar to image super-resolution (sub-pixel). Texture mapping using the Kinect RGB data can be applied.

43 3D-Modelling - KinectFusion. Figure: A) 3D reconstruction with surface normals; B) texture-mapped model; C) the system monitors real-time changes in the scene and colours large changes yellow (segmentation).

44 3D-Modelling - Low-cost Handheld Scanning. A basic but compelling use of KinectFusion is as a low-cost handheld scanner. There are some alternatives fulfilling this task with passive and active cameras, but this speed and quality with such low-cost hardware has not been demonstrated before. Reconstructed 3D models captured with KinectFusion can be imported into CAD or other 3D modelling applications. It is also possible to reverse the 6DOF tracking by moving an object by hand in front of a static Kinect camera.

45 3D-Modelling - Object Segmentation. Task: scan a certain smaller object within the entire scene. KinectFusion allows this by scanning the scene first; the user can then interact with the object to be scanned by moving it, and the object can be separated cleanly from the background model. This allows rapid segmentation of objects without a GUI. To detect the objects moved by the user, the ICP outliers are used.

46 3D-Modelling - Geometry-Aware Augmented Reality (AR). Beyond scanning, KinectFusion allows realistic forms of AR: a 3D virtual world is overlaid on and interacts with the real-world representation. The live 3D model allows virtual graphics to be precisely occluded by the real world, and the virtual objects can cast shadows on the real geometry.

47 3D-Modelling - Geometry-Aware Augmented Reality (AR). Figure: A)-D) virtual sphere composited onto the texture-mapped 3D model; E) occlusion handling using the live depth map; F) occlusion handling using the 3D reconstruction.

48 3D-Modelling - GPU Implementation. The approach to real-time camera tracking and surface reconstruction is based on two well-studied algorithms: ICP and 'A Volumetric Method for Building Complex Models from Range Images'. Both have been designed for highly parallel execution, which GPUs are predestined for. The GPU pipeline of KinectFusion consists of four main stages.

49 3D-Modelling - GPU Implementation. The four main stages are depth map conversion, camera tracking, volumetric integration and raycasting. Each stage is executed on the GPU using the CUDA language.

50 3D-Modelling - GPU Implementation. Depth map conversion: at time i, each CUDA thread operates in parallel on a separate pixel u = (x, y) of the incoming depth map D_i(u). Using the intrinsic calibration matrix K of the Kinect infrared camera, each CUDA thread reprojects its depth measurement as a 3D vertex in the camera's coordinate system: v_i(u) = D_i(u) K^{-1} [u, 1]. Corresponding normal vectors for each vertex are computed from neighbouring reprojected points, resulting in a single normal map N_i.
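
A CPU sketch of this stage with NumPy (KinectFusion runs one CUDA thread per pixel instead); the function name and the finite-difference normal estimate are assumptions of this sketch.

```python
import numpy as np

def depth_to_vertices_and_normals(D, K):
    """D is an (H, W) depth map in metres, K the 3x3 IR camera intrinsics.
    Returns per-pixel vertices V and unit normals N in camera coordinates."""
    H, W = D.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)   # homogeneous pixels [u, v, 1]
    rays = pix @ np.linalg.inv(K).T                                   # K^{-1} [u, v, 1] per pixel
    V = D[..., None] * rays                                           # v_i(u) = D_i(u) K^{-1} [u, 1]
    # normals from neighbouring reprojected points: cross product of the
    # finite differences along the two image axes
    dx = np.zeros_like(V); dx[:, :-1] = V[:, 1:] - V[:, :-1]
    dy = np.zeros_like(V); dy[:-1, :] = V[1:, :] - V[:-1, :]
    N = np.cross(dx, dy)
    norm = np.linalg.norm(N, axis=-1, keepdims=True)
    N = np.where(norm > 0, N / np.maximum(norm, 1e-12), 0.0)
    return V, N
```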

51 3D-Modelling - GPU Implementation. Depth map conversion: the 6DOF camera pose is a rigid-body transform consisting of a 3x3 rotation matrix and a 3D translation vector. With this transform, vertices and normals can be converted into global coordinates, forming a global coordinate system for the reconstruction.

52 3D-Modelling - GPU Implementation. Camera tracking: camera tracking in KinectFusion is realised using the ICP algorithm. The algorithm estimates the transform that closely aligns the current oriented points with those from the previous frame of the Kinect camera. This gives a 6DOF transform which can be applied incrementally to obtain the global camera pose T_i.

53 3D-Modelling - GPU Implementation. Camera tracking: each GPU thread tests the compatibility of the corresponding points found by ICP and rejects those that are not within a Euclidean distance threshold. Given the set of corresponding points, the output of each ICP iteration is a single relative transformation matrix T_rel that minimizes the point-to-plane error.
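
One common way to linearize the point-to-plane objective into a small 6x6 system, shown here as a CPU sketch with already-matched correspondences; the function name, the small-angle parametrization and the normal-equation solve are assumptions of this sketch, not necessarily what the KinectFusion GPU kernel does.

```python
import numpy as np

def point_to_plane_step(src, dst, dst_normals, dist_thresh=0.1):
    """One linearised point-to-plane ICP step.  src, dst and dst_normals are
    (M, 3) arrays of matched points and target normals.  Solves for a small
    rotation (alpha, beta, gamma) and translation t minimising
    sum_i [ (R src_i + t - dst_i) . n_i ]^2 and returns T_rel as a 4x4 matrix."""
    # reject pairs farther apart than the Euclidean distance threshold
    keep = np.linalg.norm(src - dst, axis=1) < dist_thresh
    s, d, n = src[keep], dst[keep], dst_normals[keep]
    # linearised residual: [s x n | n] . x + (s - d) . n  with x = (alpha, beta, gamma, tx, ty, tz)
    A = np.hstack([np.cross(s, n), n])                 # (M, 6)
    b = -np.sum((s - d) * n, axis=1)                   # (M,)
    x = np.linalg.solve(A.T @ A, A.T @ b)              # normal equations of the 6x6 system
    alpha, beta, gamma, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1, -gamma, beta],
                          [gamma, 1, -alpha],
                          [-beta, alpha, 1]])          # small-angle rotation approximation
    T[:3, 3] = [tx, ty, tz]
    return T
```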

54 3D-Modelling - GPU Implementation. Camera tracking: one of the key novel contributions of this GPU-based approach is that ICP is performed on all measurements provided by the 640x480 Kinect depth map. There is no sparse sampling of points and no need for feature extraction; this makes the camera tracking dense.

55 3D-Modelling - GPU Implementation. Volumetric integration: a signed distance function (SDF) is used. The 3D vertices are integrated into a 3D voxel grid, with each voxel storing a relative distance to the actual surface: positive values in front of the surface, negative values behind it, and zero crossings defining the surface interface. These SDF values are generated for all voxels.
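
A naive CPU sketch of integrating one depth map into a truncated signed distance volume with a running weighted average; the array layout, truncation distance and weighting are assumptions of this sketch rather than KinectFusion's exact scheme.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, voxel_size, T_cam, K, D, trunc=0.03):
    """tsdf and weights are 3-D arrays, T_cam the 4x4 global camera pose,
    K the 3x3 intrinsics, D the current (H, W) depth map in metres.
    Returns updated copies of the TSDF and weight volumes."""
    dims = tsdf.shape
    ii, jj, kk = np.meshgrid(*[np.arange(s) for s in dims], indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size   # voxel centres (global)
    T_inv = np.linalg.inv(T_cam)                     # global -> camera
    cam = pts @ T_inv[:3, :3].T + T_inv[:3, 3]
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, np.inf)              # avoid dividing by zero / negative depth
    proj = cam @ K.T
    u = np.round(proj[:, 0] / z_safe).astype(int)
    v = np.round(proj[:, 1] / z_safe).astype(int)
    H, W = D.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    depth = D[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)]
    sdf = depth - z                                  # positive in front of the surface, negative behind
    update = valid & (depth > 0) & (sdf > -trunc)    # skip voxels far behind the measured surface
    t_new = np.clip(sdf / trunc, -1.0, 1.0)
    flat_t = tsdf.reshape(-1).astype(float)
    flat_w = weights.reshape(-1).astype(float)
    # running weighted average of the truncated signed distances per voxel
    flat_t[update] = (flat_t[update] * flat_w[update] + t_new[update]) / (flat_w[update] + 1.0)
    flat_w[update] += 1.0
    return flat_t.reshape(dims), flat_w.reshape(dims)
```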

56 3D-Modelling - GPU Implementation. Volumetric integration: the big challenge is to achieve real-time rates with a volumetric representation on a 3D voxel grid. It is not memory efficient: a 512^3 volume with 32-bit voxels requires 512 MB of memory (512^3 voxels x 4 bytes).

57 3D-Modelling - GPU Implementation. Volumetric integration: but it is speed efficient! Aligned memory accesses from parallel threads can be performed very quickly; a complete sweep over such a volume can be done in 2 ms on modern GPUs (NVIDIA GTX 470).

58 3D-Modelling - GPU Implementation. Raycasting: a GPU-based raycaster generates views of the surface. Each GPU thread walks a single ray and renders a single pixel of the output image; the surface is extracted by observing zero crossings of the SDF along the ray.
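
A sketch of what a single raycasting thread does, marching one ray until the SDF changes sign; the fixed step size and the lack of trilinear interpolation are simplifications of this sketch.

```python
import numpy as np

def raycast_pixel(tsdf, voxel_size, origin, direction, step=0.01, max_dist=4.0):
    """March one ray from the camera origin through the TSDF volume and return
    the first positive-to-negative zero crossing as the surface point."""
    def sample(p):
        idx = np.floor(p / voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(tsdf.shape)):
            return None                           # outside the reconstruction volume
        return tsdf[tuple(idx)]

    prev, t = None, 0.0
    while t < max_dist:
        value = sample(origin + t * direction)
        if value is not None and prev is not None and prev > 0 >= value:
            return origin + t * direction         # zero crossing: surface along this ray
        if value is not None:
            prev = value
        t += step
    return None                                   # ray left the volume without hitting a surface
```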

59 3D-Modelling - Conclusion. KinectFusion is a real-time 3D reconstruction and interaction system with: a novel GPU pipeline for 3D tracking, reconstruction, segmentation, rendering and interaction; core novel uses, such as low-cost scanning and advanced augmented reality with physics-based interaction; and new methods for simultaneously segmenting, tracking and reconstructing dynamic users and the background.

60 3D-Modelling - Video.

61 3D-Modelling - References.
[1] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 14, pp. 239-256, February 1992.
[2] B. K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," Journal of the Optical Society of America A (JOSA A), vol. 4, p. 629, April 1987.
[3] Z. Yaniv, "Rigid registration: the iterative closest point algorithm," handout, The Hebrew University, School of Engineering and Computer Science, November.
[4] P. Stelldinger, "Computer vision," module script.
[5] Clay model of the BMW 5 GT. Available online at bmw-5-series-gt-concept-clay-model_1.jpg, visited in November.
[6] D. D. Breen, Drexel Geometric Biomedical Computing Group. Available online, visited in December.
[7] N. Burrus, Kinect RGB Demo v0.4.0, July. Available online at KinectRgbDemoV4?from=Research.KinectRgbDemoV4, visited in December.
[8] L. Mach, insight3d. Available online, visited in October.
[9] P. Cignoni et al., MeshLab, August 2012. Available online, visited in December.
[10] S. Izadi et al., "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera," in UIST (J. S. Pierce, M. Agrawala, and S. R. Klemmer, eds.), ACM, 2011. Available online at conf/uist/uist2011.html#izadikhmnkshfdf11.

62 Thank you for listening! Nikolas Slottke, Hendrik Linne, {7slottke, 7linne}@informatik.uni-hamburg.de, Fakultät für Mathematik, Informatik und Naturwissenschaften, Technische Aspekte Multimodaler Systeme
