ICP and 3D-Reconstruction

N. Slottke, H. Linne 1 Nikolas Slottke, Hendrik Linne, {7slottke, 7linne}@informatik.uni-hamburg.de, Fakultät für Mathematik, Informatik und Naturwissenschaften, Technische Aspekte Multimodaler Systeme, 17 December 2012

N. Slottke, H. Linne 2 Outline: Introduction (Motivation, Challenge), ICP (Iterative Closest Point Algorithm, Improving, Conclusion), 3D-Modelling (Structured Light, Kinect Camera)

N. Slottke, H. Linne 3 Outline: Introduction (Motivation, Challenge), ICP (Iterative Closest Point Algorithm, Improving, Conclusion), 3D-Modelling (Structured Light, Kinect Camera)

N. Slottke, H. Linne 4 Introduction - Motivation At the beginning... Why should we use three-dimensional reconstruction? The answer is rather simple: whenever we need a virtual model of something in order to handle it virtually.

N. Slottke, H. Linne 5 Introduction - Motivation One task could be... to create clay models as a first step and later on derive a virtual CAD model from the real clay model.

N. Slottke, H. Linne 6 Introduction - Motivation Another task could be... the medical use of three-dimensional models.

N. Slottke, H. Linne 7 Introduction - Motivation For robotics it is interesting, too. One can create models of the environment the robot is acting in. A current project is the RACE project at the University of Hamburg: Robustness by Autonomous Competence Enhancement. The PR2 robot is used for this. It is knowledge driven, with high-level and low-level experiences. High-level: goals, tasks and behaviors. Low-level: sensory data.

N. Slottke, H. Linne 8 Introduction - Challenge The handling. How do we create a model? Digitize the environment (or maybe only one object of interest): take a picture with depth information and create a point cloud from this picture. Later on, one can create a surface for the model.

N. Slottke, H. Linne 9 Introduction - Challenge And then? The environment moves... or the robot moves... or the robot moves the environment. Again, digitize the environment. But now we have two point clouds, each from a different perspective. We want to match them to get a whole model. This is difficult because of the different perspectives: some points belong together, some do not. Manual matching is not an option; one needs automatic matching.

N. Slottke, H. Linne 10 Introduction - Challenge Outline: Introduction (Motivation, Challenge), ICP (Iterative Closest Point Algorithm, Improving, Conclusion), 3D-Modelling (Structured Light, Kinect Camera)

N. Slottke, H. Linne 11 ICP - Iterative Closest Point Algorithm Iterative Closest Point Algorithm This algorithm was first published by Paul J. Besl and Neil D. McKay in the paper "A Method for Registration of 3-D Shapes" in 1992. It builds on the prior work "Closed-form solution of absolute orientation using unit quaternions" by Berthold K. P. Horn from the University of Hawaii, published in 1987.

N. Slottke, H. Linne 12 ICP - Iterative Closest Point Algorithm Prior assumptions The prior point cloud is called the model: a model X with N_x points, an individual point x, where X = {x_i} with 0 ≤ i < N_x. The new point cloud is called the data set: a data set P, the point cloud to be matched, with N_p points, an individual point p, where P = {p_i} with 0 ≤ i < N_p.

N. Slottke, H. Linne 13 ICP - Iterative Closest Point Algorithm The initialized settings Before the algorithm can start, several settings have to be made: start the iteration at k = 0; set P_k = P, which here means P_0 = P; set the rigid transformation vector q_0 = [1, 0, 0, 0, 0, 0, 0]; specify the threshold τ.

N. Slottke, H. Linne 14 ICP - Iterative Closest Point Algorithm Iterative Closest Point Algorithm ICP-Algorithm 1. Compute the closest point pairing: Y_k = C(P_k, X). 2. Compute the rigid registration: (q_k, d_k) = Q(P_0, Y_k). 3. Apply the registration: P_{k+1} = q_k(P_0). 4. Terminate the iteration when the change in mean-square error falls below τ: d_k − d_{k+1} < τ.
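To make the loop structure concrete, here is a minimal Python/NumPy sketch of these four steps. It is only an illustration of the iteration, not code from the cited papers; the helpers closest_points and rigid_registration are assumed names and are sketched after the next two slides.

```python
import numpy as np

def icp(P, X, tau=1e-6, max_iter=50):
    """Minimal ICP loop: P is the data set, X the model (both (N, 3) arrays)."""
    P0 = P.copy()                            # keep the original data set P_0
    R, t = np.eye(3), np.zeros(3)            # accumulated rigid registration
    d_prev = np.inf
    for k in range(max_iter):
        Pk = P0 @ R.T + t                    # step 3: apply registration to P_0
        Y = closest_points(Pk, X)            # step 1: Y_k = C(P_k, X)
        R, t, d = rigid_registration(P0, Y)  # step 2: (q_k, d_k) = Q(P_0, Y_k)
        if abs(d_prev - d) < tau:            # step 4: change in MSE below tau
            break
        d_prev = d
    return R, t
```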

N. Slottke, H. Linne 15 ICP - Iterative Closest Point Algorithm Compute the closest points The distance metric d between an individual point p and X is denoted by d(p, X) = min_{x ∈ X} ||x − p||. The closest point in X is denoted y such that d(p, y) = d(p, X), where y ∈ X. This is performed for each point in P. Y denotes the resulting set of closest points and C the closest point operator: Y = C(P, X).
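A brute-force sketch of the closest point operator C (the function name and the (N, 3) array shapes are illustrative assumptions):

```python
import numpy as np

def closest_points(Pk, X):
    """For every point p in P_k, return the closest point y in the model X,
    i.e. the y minimizing ||x - p||.  Brute force: O(N_p * N_x)."""
    d2 = ((Pk[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # (N_p, N_x) squared distances
    return X[np.argmin(d2, axis=1)]
```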

N. Slottke, H. Linne 16 ICP - Iterative Closest Point Algorithm Compute the rigid registration The Q-operator is applied to obtain q_k and d_k. d_k is the mean-square point matching error, computed as d_k = (1/N_p) Σ_{i=1}^{N_p} ||y_{i,k} − p_{i,k}||². q_k is the transformation vector in the registration state space; it consists of a unit rotation quaternion (four components) and a translation (x, y, z), matching the seven-component initialization above. As the iterative closest point algorithm proceeds, a sequence of registration vectors is generated: q_1, q_2, q_3, ...
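A sketch of what the Q-operator computes from the paired points. Besl and McKay (via Horn) solve for the rotation with unit quaternions; the SVD-based solution below yields the same least-squares rigid transform and is used here only to keep the sketch short.

```python
import numpy as np

def rigid_registration(P0, Y):
    """Least-squares rigid registration between paired point sets P_0 and Y.
    Returns (R, t, d) with d the mean-square matching error d_k."""
    p_mean, y_mean = P0.mean(axis=0), Y.mean(axis=0)
    H = (P0 - p_mean).T @ (Y - y_mean)             # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = y_mean - R @ p_mean
    d = np.mean(np.sum((Y - (P0 @ R.T + t)) ** 2, axis=1))   # d_k
    return R, t, d
```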

N. Slottke, H. Linne 17 ICP - Improving - Simple Possibilities What costs? Looking at the algorithm, one can see that finding the pairing is the most computationally expensive step, with a worst-case cost of O(N_p N_x). Therefore it isn't practical for large data sets. To improve the algorithm there are two immediate possibilities: do not take the square root in the Euclidean distance calculations, and continue accumulating a distance only while the partial sum is still smaller than the current minimal distance. But it is still O(N_p N_x)!
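A toy sketch of those two simple speedups for a single query point (pure Python, hypothetical helper name); it avoids square roots entirely and aborts a candidate as soon as its partial squared sum exceeds the best distance found so far:

```python
def closest_point_early_exit(p, X):
    """Closest point to p in X using squared distances and early abort.
    Still O(N_p * N_x) overall when called for every p in P."""
    best_d2, best_x = float("inf"), None
    for x in X:
        d2 = 0.0
        for a, b in zip(p, x):        # accumulate coordinate by coordinate
            d2 += (a - b) ** 2
            if d2 >= best_d2:         # partial sum already too large: give up on x
                break
        else:
            best_d2, best_x = d2, x   # completed the sum and it is the best so far
    return best_x
```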

N. Slottke, H. Linne 18 ICP - Improving - k-d Trees Using k-d trees One can use k-dimensional trees as the data structure to perform the nearest-neighbor searches in less than linear time. A k-dimensional binary tree over dim-dimensional points imposes a spatial decomposition which prunes much of the search space. It is similar to a regular binary tree, with the exception that the key used for comparison changes between levels of the tree: at level i, the (i mod dim) coordinate is used.

N. Slottke, H. Linne 19 ICP - Improving - k-d Trees Naive k-d tree A naive k-d tree with dim = 2, a = (1, 2), b = (3, 4), c = (5, 6) and d = (7, 8). The output can differ depending on the insertion order. (1) Left tree: insertion order abcd. (2) Right tree: insertion order bcad.

N. Slottke, H. Linne 20 ICP - Improving - k-d Trees Median k-d tree Median k-d tree creation. 1. If the cardinality of {P_i} is 1, create a leaf node. 2. Else, if level i is even: split {P_i} into two subsets with a vertical line through the median x-coordinate of the points in {P_i}; let P1 be the set of points to the left and P2 the set of points to the right; points exactly on the line belong to P1. If level i is odd: split {P_i} into two subsets with a horizontal line through the median y-coordinate of the points in {P_i}; let P1 be the set of points below and P2 the set of points above; points exactly on the line belong to P1.
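A compact Python sketch of a median k-d tree plus a pruning nearest-neighbour search. It follows the common variant that stores the median point in the inner node (the slide's formulation keeps points only in leaves), so it is an illustration of the idea rather than a transcription of the slide:

```python
def build_kdtree(points, depth=0, dim=2):
    """Median k-d tree: split on coordinate (depth mod dim); ties go left."""
    pts = sorted(points, key=lambda p: p[depth % dim])
    if not pts:
        return None
    if len(pts) == 1:
        return {"point": pts[0], "axis": None, "left": None, "right": None}
    m = (len(pts) - 1) // 2                        # median index
    return {"point": pts[m], "axis": depth % dim,
            "left": build_kdtree(pts[:m], depth + 1, dim),
            "right": build_kdtree(pts[m + 1:], depth + 1, dim)}

def nn_search(node, query, best=None):
    """Nearest neighbour; prunes subtrees that cannot beat the best candidate."""
    if node is None:
        return best
    p = node["point"]
    d2 = sum((a - b) ** 2 for a, b in zip(p, query))   # squared distance, no sqrt
    if best is None or d2 < best[1]:
        best = (p, d2)
    if node["axis"] is None:                           # leaf
        return best
    diff = query[node["axis"]] - p[node["axis"]]
    near, far = (node["left"], node["right"]) if diff <= 0 else (node["right"], node["left"])
    best = nn_search(near, query, best)
    if diff ** 2 < best[1]:                            # the far side may still hide a closer point
        best = nn_search(far, query, best)
    return best
```

For example, with the points from this slide, build_kdtree([(1, 2), (3, 4), (5, 6), (7, 8)]) followed by nn_search(tree, (4, 4)) returns the point b = (3, 4) together with its squared distance 1, without visiting the whole tree.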

N. Slottke, H. Linne 21 ICP - Improving - k-d Trees Median k-d tree A median k-d tree with dim = 2, a = (1, 2), b = (3, 4), c = (5, 6) and d = (7, 8).

N. Slottke, H. Linne 22 ICP - Improving - Accelerated ICP Accelerated ICP (1) As the iterative closest point algorithm proceeds, a sequence of registration vectors is generated: q_1, q_2, q_3, ... This traces out a path in the registration state space toward a locally optimal shape match. Let δθ be a sufficiently small angular tolerance. Look back at the last three registration state vectors q_k, q_{k−1}, q_{k−2}. Compute a linear approximation and a parabolic interpolant. This gives a possible linear update, based on the zero crossing of the line, and a possible parabola update, based on the extremum point of the parabola.

N. Slottke, H. Linne 23 ICP - Improving - Accelerated ICP Accelerated ICP (2) Consistent direction allows acceleration of the ICP algorithm.

N. Slottke, H. Linne 24 ICP - Improving - Accelerated ICP What costs again? Is there an improvement by using the k-d tree or by accelerating the ICP? Yes, remember the cost of O(N_p N_x). The cost to construct a k-d tree is O(n log n): median finding can be done in O(n), and given this complexity for each tree level and that there are log n levels, the total construction time is O(n log n). Each nearest-neighbor query then takes O(log n) on average instead of O(n), so the pairing step no longer dominates. The improvement from the accelerated ICP depends on the point clouds and their poses: a nominal run of more than 50 basic ICP iterations is typically accelerated to 15 or 20 iterations.

N. Slottke, H. Linne 25 ICP - Conclusion Disadvantages The first disadvantage is the time complexity of O(N_p N_x) of the basic ICP algorithm. One has to improve the ICP algorithm when using it for real-time applications. Another disadvantage appears if the two point clouds to be matched start far away from each other with very different poses: by minimizing the mean-square error, the ICP algorithm can terminate in a local minimum instead of the global minimum, and the matching will be incorrect. One needs point clouds that do not differ too much.

N. Slottke, H. Linne 26 ICP - Conclusion Alternatives There are several alternatives that can be used; these alternatives are image processing algorithms. Surface-based methods: template matching, Fourier methods. Feature-based methods: using spatial relations, invariant descriptors, perspective reconstruction.

N. Slottke, H. Linne 27 ICP - Conclusion Alternatives - Using SIFT Features SIFT features are robust against scaling, rotation and translation. Partial coverage is also tolerated.

N. Slottke, H. Linne 28 ICP - Conclusion Alternatives - insight3d From the given images, the geometry is reconstructed and the camera positions are computed.

N. Slottke, H. Linne 29 ICP - Conclusion Alternatives - insight3d With the information about the camera positions, one can create a three-dimensional model from only three images.

N. Slottke, H. Linne 30 ICP - Conclusion Outline: Introduction (Motivation, Challenge), ICP (Iterative Closest Point Algorithm, Improving, Conclusion), 3D-Modelling (Structured Light, Kinect Camera)

N. Slottke, H. Linne 31 3D-Modelling - Structured Light Structured Light Structured light is the process of projecting a known pattern of pixels onto a scene and extracting depth information. Often grids or horizontal bars are used. The deformation of these patterns allows vision systems to extract depth information. Sometimes invisible or imperceptible light is used, e.g. infrared light, to prevent interfering with other computer vision systems. Alternating between two exactly opposite patterns at high frame rates is also possible.
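As a hedged illustration of decoding such patterns, the sketch below turns a stack of binary stripe images into per-pixel projector column indices. Gray-code stripes are one common pattern choice, not something this slide commits to, and all names and array shapes are assumptions; triangulating the decoded column against the camera pixel then yields depth.

```python
import numpy as np

def decode_gray_stripes(captured, thresh=0.5):
    """captured: (num_bits, H, W) images of projected stripe patterns, values in [0, 1].
    Returns the projector column index per camera pixel."""
    bits = (captured > thresh).astype(np.uint32)     # binarize each pattern
    gray = np.zeros(bits.shape[1:], dtype=np.uint32)
    for b in bits:                                   # pack bits, first image = MSB
        gray = (gray << 1) | b
    binary = gray.copy()                             # Gray code -> binary column index
    mask = gray >> 1
    while mask.any():
        binary ^= mask
        mask >>= 1
    return binary
```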

N. Slottke, H. Linne 32 3D-Modelling - Kinect Camera Kinect Camera Kinect is a motion-sensing input device. Originally developed for Microsoft's Xbox 360 under the codename Project Natal. Launched in November 2010. RGB camera with VGA resolution (640x480). IR depth-sensing camera with VGA resolution. IR laser projector for depth sensing (structured light). Depth ranging limits are 1.2-3.5 m.

N. Slottke, H. Linne 33 3D-Modelling - Abstract KinectFusion enables a user holding and moving a Kinect camera to create a reconstruction of an indoor scene and interact with the scene. Tracking of the 3D pose of the Kinect camera. Reconstructing a precise 3D model of the scene in real time. Object segmentation and user interaction in front of the sensor. Enabling real-time multi-touch interaction anywhere.

N. Slottke, H. Linne 34 3D-Modelling - Abstract A) User in a coffee table scene. B) Phong-shaded reconstructed 3D model. C) 3D model texture-mapped with Kinect RGB data and real-time particles simulated on the model. D) Multi-touch interactions performed on any reconstructed surface. E) Real-time segmentation and 3D tracking of an object.

N. Slottke, H. Linne 35 3D-Modelling - Introduction Kinect has made depth cameras very popular and accessible to all, especially for researchers. The Kinect camera generates real-time depth maps with discrete range measurements. This data can be reprojected as a set of 3D points (a point cloud). Kinect depth data is compelling compared to other commercially available depth cameras, but also very noisy.

N. Slottke, H. Linne 36 3D-Modelling - Introduction Generating a 3D model from one viewpoint has to make strong assumptions about neighboring points. This leads to a noisy and incomplete low-quality mesh. To create a complete or even watertight 3D model, different viewpoints must be captured and fused into a single representation.

N. Slottke, H. Linne 37 3D-Modelling - Introduction KinectFusion takes live depth data from a moving Kinect camera and creates a single high-quality 3D model. A user can move through any indoor environment with the camera and reconstruct the physical scene within seconds. The system continuously tracks the six degrees-of-freedom (6DOF) pose of the camera. New viewpoints are fused into the global representation of the scene. A GPU processing pipeline allows for accurate camera tracking and surface reconstruction in real time.

N. Slottke, H. Linne 38 3D-Modelling - Introduction A) RGB image of the scene. B) Normals extracted from the raw Kinect depth map. C) 3D mesh created from a single viewpoint/depth map. D) and E) 3D model generated from KinectFusion (D) and rendered with Phong shading (E).

N. Slottke, H. Linne 39 3D-Modelling - Introduction KinectFusion makes the Kinect camera a low-cost handheld scanner. The reconstructed 3D model can also be leveraged for geometry-aware augmented reality and physics-based interaction. This user interaction in front of the camera is a fundamental challenge: the scene itself can no longer be assumed to be static.

N. Slottke, H. Linne 40 3D-Modelling - Related Work Reconstructing geometry from active sensors, passive cameras, etc. is a well-studied area of research in computer vision and also in robotics. SLAM (Simultaneous Localization and Mapping) is one example from robotics, and it is basically what KinectFusion does when tracking the 6DOF pose of the Kinect camera and reconstructing a map/model.

N. Slottke, H. Linne 41 3D-Modelling - Related Work Challenges and Features: Interactive rates. No explicit feature detection. High-quality reconstruction of geometry. Dynamic interaction assumed. Infrastructure-less. Room scale.

N. Slottke, H. Linne 42 3D-Modelling - Moving the Kinect camera creates different viewpoints. Holes in the model are filled. Even small motions caused by camera shake result in new viewpoints. This creates an effect similar to image super-resolution (sub-pixel). Texture mapping through Kinect RGB data can be applied.

N. Slottke, H. Linne 43 3D-Modelling - A) 3D reconstruction with surface normals. B) Texture-mapped model. C) The system monitors real-time changes in the scene and colors large changes yellow (segmentation).

N. Slottke, H. Linne 44 3D-Modelling - Low-cost Handheld Scanning A basic but compelling task for KinectFusion is to serve as a low-cost handheld scanner. There are some alternatives fulfilling this task with passive and active cameras, but this speed and quality with such low-cost hardware has not been demonstrated before. Reconstructed 3D models captured with KinectFusion can be imported into CAD or other 3D modeling applications. It is also possible to reverse the 6DOF tracking by moving an object by hand in front of a static Kinect camera.

N. Slottke, H. Linne 45 3D-Modelling - Object Segmentation Task: scan a certain smaller object within the entire scene. KinectFusion allows this by scanning the scene first. The user can then interact with the object to be scanned by moving it. The object can be separated cleanly from the background model. This allows rapid segmentation of objects without a GUI. To detect the objects moved by the user, the ICP outliers are used.

N. Slottke, H. Linne 46 3D-Modelling - Geometry-Aware Augmented Reality (AR) Beyond scanning, KinectFusion allows realistic forms of AR. A virtual 3D world is overlaid onto and interacts with the real-world representation. The live 3D model allows virtual graphics to be precisely occluded by the real world. The virtual objects can also cast shadows on the real geometry.

N. Slottke, H. Linne 47 3D-Modelling - Geometry-Aware Augmented Reality (AR) A)-D) Virtual sphere composited onto the texture-mapped 3D model. E) Occlusion handling using the live depth map. F) Occlusion handling using the 3D reconstruction.

N. Slottke, H. Linne 48 3D-Modelling - GPU Implementation The approach for real-time camera tracking and surface reconstruction is based on two well-studied algorithms: ICP and 'A Volumetric Method for Building Complex Models from Range Images'. Both have been designed for highly parallel execution, for which GPUs are predestined. The GPU pipeline of KinectFusion consists of four main stages.

N. Slottke, H. Linne 49 3D-Modelling - GPU Implementation Four main stages: Depth Map Conversion, Camera Tracking, Volumetric Integration, Raycasting. Each step is executed on the GPU using the CUDA language.

N. Slottke, H. Linne 50 3D-Modelling - GPU Implementation Depth Map Conversion At time i each CUDA thread operates in parallel on a separate pixel u = (x, y) in the incoming depth map D_i(u). With the intrinsic calibration matrix K of the Kinect infrared camera, each CUDA thread reprojects the specific depth measurement as a 3D vertex in the camera's coordinate system: v_i(u) = D_i(u) K^{-1} [u, 1]. Corresponding normal vectors for each vertex are computed using neighboring reprojected points. This results in a single normal map N_i.
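A NumPy sketch of this stage (in KinectFusion each pixel is handled by its own CUDA thread; here whole-array operations stand in, and the names and shapes are illustrative):

```python
import numpy as np

def depth_to_vertices_and_normals(D, K):
    """Back-project a depth map D (H, W) with intrinsics K into a vertex map
    v_i(u) = D_i(u) * K^{-1} * [u, 1]^T and estimate normals from neighbours."""
    H, W = D.shape
    K_inv = np.linalg.inv(K)
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)     # homogeneous pixel coords
    V = D[..., None] * (pix @ K_inv.T)                      # vertex map (H, W, 3)
    dx = V[:, 1:, :] - V[:, :-1, :]                         # neighbouring reprojected points
    dy = V[1:, :, :] - V[:-1, :, :]
    N = np.cross(dx[:-1, :, :], dy[:, :-1, :])              # normal map (H-1, W-1, 3)
    N /= np.linalg.norm(N, axis=-1, keepdims=True) + 1e-12
    return V, N
```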

N. Slottke, H. Linne 51 3D-Modelling - GPU Implementation Depth Map Conversion The 6DOF camera pose is a rigid body transform consisting of a 3x3 rotation matrix and a 3D translation vector. With this transform we can convert a vertex and a normal into global coordinates, forming a global coordinate system for the reconstruction.
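A minimal sketch of that conversion, assuming the pose is given as a rotation matrix R and a translation vector t and that V and N are (..., 3) arrays:

```python
import numpy as np

def to_global(V, N, R, t):
    """Vertices are rotated and translated into the global frame; normals are only rotated."""
    return V @ R.T + t, N @ R.T
```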

N. Slottke, H. Linne 52 3D-Modelling - GPU Implementation Camera Tracking The camera tracking in KinectFusion is realised using the ICP algorithm. The algorithm is used to estimate the transform which closely aligns the current oriented points with those from the previous frame of the Kinect camera. This gives a 6DOF transform which can be applied incrementally to give the global camera pose T_i.

N. Slottke, H. Linne 53 3D-Modelling - GPU Implementation Camera Tracking Each GPU thread tests the compatibility of corresponding points found by ICP and rejects those that are not within a Euclidean distance threshold. Given the set of corresponding points, the output of each ICP iteration is a single relative transformation matrix T_rel that minimizes the point-to-plane error.
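A small sketch of the compatibility test and of the point-to-plane residual that T_rel minimizes. The projective data association that produces the candidate pairs and the linearized solve for T_rel are omitted; the names and the threshold value are assumptions.

```python
import numpy as np

def point_to_plane_error(V_src, V_dst, N_dst, dist_thresh=0.1):
    """V_src, V_dst, N_dst: matched points/normals, shape (N, 3).
    Rejects pairs beyond a Euclidean distance threshold, then sums the
    squared point-to-plane residuals n_q . (p - q) over the survivors."""
    diff = V_src - V_dst
    valid = np.linalg.norm(diff, axis=1) < dist_thresh       # compatibility test
    residuals = np.einsum("ij,ij->i", diff[valid], N_dst[valid])
    return np.sum(residuals ** 2), valid
```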

N. Slottke, H. Linne 54 3D-Modelling - GPU Implementation Camera Tracking One of the key novel contributions of this GPU-based approach is that ICP is performed on all measurements provided by the 640x480 Kinect depth map. There is no sparse sampling of points and no need for feature extraction. This also makes the camera tracking dense.

N. Slottke, H. Linne 55 3D-Modelling - GPU Implementation Volumetric Integration Use of signed distance functions (SDFs): 3D vertices are integrated into a 3D voxel grid, with each voxel specifying a relative distance to the actual surface. Positive values lie in front of the surface, negative values behind it. Zero-crossings define the surface interface. These SDF values are generated per voxel.
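A sketch of one volumetric-integration sweep using a truncated signed distance and a running weighted average, roughly in the spirit of the cited volumetric method; the flat array layout, names and truncation value are assumptions, not the KinectFusion implementation.

```python
import numpy as np

def update_tsdf(tsdf, weights, voxel_centers, D, K, R, t, trunc=0.03):
    """tsdf, weights: flat arrays over M voxels; voxel_centers: (M, 3) world coords;
    D: depth map; K: intrinsics; (R, t): camera-to-world pose of this frame."""
    cam = (voxel_centers - t) @ R            # world -> camera (R is orthonormal)
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)         # avoid dividing by zero behind the camera
    pix = cam @ K.T
    u = np.round(pix[:, 0] / z_safe).astype(int)
    v = np.round(pix[:, 1] / z_safe).astype(int)
    H, W = D.shape
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    sdf = D[v[ok], u[ok]] - z[ok]            # positive in front of the surface, negative behind
    keep = sdf > -trunc                      # drop voxels far behind the observed surface
    f = np.clip(sdf[keep] / trunc, -1.0, 1.0)
    idx = np.where(ok)[0][keep]
    weights[idx] += 1.0
    tsdf[idx] += (f - tsdf[idx]) / weights[idx]   # running weighted average
    return tsdf, weights
```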

N. Slottke, H. Linne 56 3D-Modelling - GPU Implementation Volumetric Integration Big challenge: achieving real-time rates. The volumetric representation on a 3D voxel grid is not memory efficient: a 512³ volume with 32-bit voxels requires 512 MB of memory.

N. Slottke, H. Linne 57 3D-Modelling - GPU Implementation Volumetric Integration But it is speed efficient! Aligned memory access from parallel threads can be performed very quickly. A complete sweep over a 512³ volume can be done in 2 ms on modern GPUs (NVIDIA GTX 470).

N. Slottke, H. Linne 58 3D-Modelling - GPU Implementation Raycasting A GPU-based raycaster generates views of the surface. Basically, each GPU thread walks a single ray and renders a single pixel in the output image. The surface is extracted by observing zero-crossings of the SDF along the ray.
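A per-ray sketch of what a single raycasting thread does; tsdf_lookup is an assumed function that interpolates the voxel grid, and a real implementation additionally skips empty space and refines the hit.

```python
import numpy as np

def raycast_pixel(tsdf_lookup, origin, direction, step=0.005, max_dist=4.0):
    """March one ray and return the first positive-to-negative zero-crossing
    of the SDF, i.e. the surface point rendered for this pixel."""
    prev_d, dist = None, 0.0
    while dist < max_dist:
        d = tsdf_lookup(origin + dist * direction)
        if prev_d is not None and prev_d > 0.0 and d <= 0.0:
            frac = prev_d / (prev_d - d)          # interpolate between the last two samples
            return origin + (dist - step + frac * step) * direction
        prev_d, dist = d, dist + step
    return None                                   # no surface along this ray
```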

N. Slottke, H. Linne 59 3D-Modelling - Conclusion KinectFusion is a real-time 3D reconstruction and interaction system with: a novel GPU pipeline for 3D tracking, reconstruction, segmentation, rendering and interaction; core novel uses such as low-cost scanning and advanced augmented reality with physics-based interaction; new methods for segmenting, tracking and reconstructing dynamic users and the background simultaneously.

N. Slottke, H. Linne 60 3D-Modelling - Video Video

N. Slottke, H. Linne 61 3D-Modelling - References
[1] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 14, pp. 239-256, February 1992.
[2] B. K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," Journal of the Optical Society of America A (JOSA A), vol. 4, p. 629, April 1987.
[3] Z. Yaniv, "Rigid registration: The iterative closest point algorithm," handout, The Hebrew University, School of Engineering and Computer Science, November 2012.
[4] P. Stelldinger, "Computer vision," module script.
[5] Clay model of the BMW 5 Series GT. Available online at http://cdn3.worldcarfans.co/2009/2/large/bmw-5-series-gt-concept-clay-model_1.jpg, visited in November 2012.
[6] D. D. Breen, Drexel Geometric Biomedical Computing Group. Available online at https://www.cs.drexel.edu/~david/geom_biomed_comp.html, visited in December 2012.
[7] N. Burrus, Kinect RGB Demo v0.4.0, July 2011. Available online at http://nicolas.burrus.name/index.php/research/KinectRgbDemoV4?from=Research.KinectRgbDemoV4, visited in December 2012.
[8] L. Mach, insight3d. Available online at http://insight3d.sourceforge.net/, visited in October 2012.
[9] P. Cignoni et al., MeshLab, August 2012. Available online at http://meshlab.sourceforge.net/, visited in December 2012.
[10] S. Izadi et al., "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera," in UIST (J. S. Pierce, M. Agrawala, and S. R. Klemmer, eds.), pp. 559-568, ACM, 2011. Available online at http://dblp.uni-trier.de/db/conf/uist/uist2011.html#izadikhmnkshfdf11.

N. Slottke, H. Linne 62 3D-Modelling - Thank you for listening! Nikolas Slottke, Hendrik Linne, {7slottke, 7linne}@informatik.uni-hamburg.de, Fakultät für Mathematik, Informatik und Naturwissenschaften, Technische Aspekte Multimodaler Systeme