3D map reconstruction with sensor Kinect
Searching for a solution applicable to small mobile robots

Peter Beňo, František Duchoň, Michal Tölgyessy, Peter Hubinský, Martin Kajan
Institute of Robotics and Cybernetics, Slovak University of Technology, Bratislava, Slovakia

Abstract: Although it is relatively easy to equip a mobile robot with a Kinect sensor, implementing a mapping algorithm can be non-trivial. In this article we describe the sensor device, robot map formats, and the two main approaches to 3D reconstruction, a problem closely connected to SLAM mapping, with the Kinect sensor. We experimentally evaluate reference implementations of both algorithms, compare their results, and discuss published and possible improvements of both approaches.

Keywords: Kinect; RGBD-6D-SLAM; Kinect Fusion; mapping; SLAM

I. INTRODUCTION

Simultaneous localization and mapping (SLAM) is one of the most important problems in today's mobile robotics. It combines the problems of localization (where is the robot?), mapping (what does the environment look like?) and navigation in the generated map [27]. Researchers have tried to solve this problem with several types of sensors: laser scanners [15], ultrasound [17], radar [16], monocular cameras [14] and stereovision cameras [18]. There are various algorithms for 2D and 3D map reconstruction and mobile robot localization based on data from these sensors. With the introduction of the Microsoft Kinect, a new type of cheap sensor appeared on the market. These sensors are commonly referred to as RGB-D cameras, because their output consists of a standard RGB camera image with an additional depth channel for each pixel. The output of this type of sensor is a non-uniform 3D point cloud, or depth map. A significant amount of research has explored ways of using this output; one possible application is 3D reconstruction.
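Since each depth pixel, together with the camera intrinsics, determines one 3D point, the conversion from an RGB-D depth map to a point cloud can be sketched as follows. This is a minimal illustration under a pinhole-camera assumption; the intrinsic values used as defaults are placeholders, not calibrated Kinect parameters.

```python
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (meters) into an N x 3 point cloud.

    fx, fy, cx, cy are illustrative pinhole intrinsics; a real Kinect
    requires calibrated values.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```

Applied to a full 640 x 480 Kinect frame, this yields the non-uniform point cloud mentioned above, with invalid (zero-depth) pixels removed.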
In our work we investigated the two most frequently used approaches to 3D reconstruction with the Kinect, and we examined the maps these algorithms produce from the viewpoint of mobile robotics. Although significant results have been published, a solution that makes a robot aware of its environment using only a Kinect is still missing.

II. 3D RECONSTRUCTION AND SLAM MAPPING

3D reconstruction is a common term for multiple methods that aim to obtain exact 3D representations of scanned objects or environments. The most common applications with RGB-D cameras are 3D reconstruction of a static object and 3D reconstruction of an environment; researchers have achieved significant results in both [1] [2] [3]. The problem of 3D reconstruction is closely related to the problem of mobile robot mapping, but it is not the same problem. 3D reconstruction focuses on obtaining the most accurate representation of the scanned environment, and the typical output of these algorithms is a texturized mesh. Robot mapping is the problem of obtaining a map of the environment that is suitable for the robot, so that we can localize the robot and navigate it through the environment. Multiple types of environment maps can be generated from Kinect data: topologic maps, graph-based scan-matching maps, occupancy grid maps and, the most common type, mesh maps.

Fig 1. Types of robot maps

In this article we examine the two most common approaches to robot mapping with the Kinect by investigating their prototype implementations. The first approach, which uses a scan-matching technique and a graph representation of the map, is investigated through the RGBD-6D-SLAM algorithm. The second approach, which

typically uses the truncated signed distance function (TSDF) as a volumetric environment representation, is investigated through its reference implementation, KinectFusion [3].

III. SENSOR KINECT

The Kinect sensor by Microsoft was introduced to the market in November 2010 as an input device for the Xbox 360 gaming console and was a very successful product, with more than 10 million devices sold by March. The Kinect has revolutionized the way end users interact with computers, because it enables developers to easily recognize people, gestures, faces and voice in their applications. The robotics community quickly discovered that the dense depth technology in the Kinect could be used for other purposes at a much lower price than traditional 3D cameras such as time-of-flight and stereovision cameras.

A. Kinect Hardware

The Kinect depth sensor emits a structured dotted pattern from its infrared projector and simultaneously captures it with a CMOS camera fitted with an IR-pass filter. The IR emitter is shifted by 0.075 m from the IR receiver. The integrated image processor of the Kinect uses the knowledge of the emitted light pattern, the lens distortion and the distance between emitter and receiver to calculate the depth at each pixel of the image; this is done internally by the Kinect device. To achieve a rate of 30 completed scans per second, the computation makes use of values from neighboring points in the pattern. This technology, called Light Coding, was developed by the PrimeSense company. The actual depth values are distances from the sensor plane rather than from the sensor itself, so the Kinect can be seen as a device that returns 3D coordinates of 3D objects.

The main hardware specifications of the Kinect are available in [4][5].

B. Kinect Drivers

Currently, there are three software solutions for accessing the Kinect device:
a.) libfreenect, the original, first, hacked driver [23],
b.) the OpenNI driver, developed by PrimeSense, the company behind the original Kinect reference design, which supports auto-calibration [24],
c.) the Microsoft Kinect for Windows SDK [19].
All three drivers are capable of delivering depth and RGB data, so all of them are suitable for solving mobile robot mapping with the Kinect. For exact mapping, the sensor has to be correctly calibrated.

IV. RGBD-6D-SLAM

RGBD-6D-SLAM is a method for the rapid creation of colored 3D objects and interior scenes with a Kinect sensor or its equivalent. Its principle of operation is based on locating SURF key points in the scanned image and using RANSAC for robust determination of the 3D transformations between frames. To achieve real-time processing, the scanned image is compared with only a subset of the previous images, with decreasing frequency. In addition, the method builds a graph whose nodes correspond to camera poses and whose edges represent 3D transformations. The graph is then optimized by HOG-Man in order to reduce the accumulated error in pose estimation. Optimization is particularly important when the robot, while scanning the environment, returns to a place already scanned. This method is part of the ROS OpenNI framework, and it uses software support from the OpenSLAM initiative. The algorithm also allows optional use of the iterative closest point algorithm to refine the produced mesh. A schematic overview of the algorithm is presented in figure 2.

Fig 2. Schematic diagram of RGBD-6D-SLAM algorithm

The most important parts of RGBD-6D-SLAM are therefore:

SURF (Speeded Up Robust Features) is a robust image feature detector. It was first presented in 2006 and has been successfully used in image processing for object recognition and 3D reconstruction. It was partially inspired by the SIFT algorithm (Scale-Invariant Feature Transform) from 1999, but it is many times faster [7][6].

RANSAC (Random Sample Consensus) is an iterative method for determining the parameters of a mathematical model from captured data containing outliers (an outlier is a value much smaller or larger than most of the other values in a data set) and inliers (points belonging to the model). It is a non-deterministic algorithm in the sense that its output is correct only with a certain probability, which increases with the number of iterations [10][8].

HOG-Man is an optimization method for graph-based SLAM. It provides a highly effective way of minimizing errors and works in both 2D and 3D [9].

Iterative closest point (ICP) is an algorithm which iteratively revises the rigid transformation needed to minimize the distance between two sets of points, which can be generated from two

successive scans. Considering two sets of points, the goal is to find the optimal transformation, composed of a translation t and a rotation R, that aligns the source set to the target. The problem can be formulated as minimizing the squared distance between neighboring point pairs. As with any gradient descent method, ICP is applicable when we have a relatively good initial guess in advance; otherwise, it is likely to be trapped in a local minimum. In the RGBD-6D-SLAM implementation, ICP is used to refine the produced mesh by aligning consecutive scans.

A. RGBDemo

RGBDemo is open-source software which aims to provide a set of clearly organized, simple tools for working with data from the Kinect sensor and for creating standalone programs for processing RGB-D images without the trouble of integrating many existing libraries. The project is divided into a library called nestk and a few demo programs that demonstrate what can be done with this library. RGBDemo is built on the OpenCV and Qt libraries; some of its components also depend on the Point Cloud Library. It works with the OpenNI, libfreenect and Kinect for Windows SDK backends. The nestk library provides built-in support for SIFT, SURF and BRIEF (Binary Robust Independent Elementary Features) [11]. We were able to compile the required libraries, nestk and RGBDemo on the most widely used operating systems (Windows, Linux and Mac OS X), and we also compiled them on two ARM-based boards, the Raspberry Pi and the Cubieboard. Compilation on the ARM platform can be nontrivial, so we provide full compilation instructions in our GitHub repository [12].

B. Experiment with RGBD-6D-SLAM

To verify the current possibilities of 3D scene reconstruction, we decided to modify one of the demo programs in RGBDemo, rgbd-reconstructor, which uses RGBD-6D-SLAM. Compared to the original program, our implementation did not contain any components unnecessary for our experiments that would slow down data processing.
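The ICP step described above (match each source point to its closest target point, then solve for the rigid transform minimizing their squared distances) can be sketched in a few lines. This is a toy point-to-point variant for illustration, not the implementation used in RGBDemo; the brute-force nearest-neighbor search would be replaced by a k-d tree in practice.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (known correspondences), via the SVD-based Kabsch method."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate nearest-neighbor matching
    and rigid re-alignment. Assumes a reasonable initial pose."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbors (use a k-d tree in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

As noted above, convergence depends on the initial guess: with a poor starting pose the nearest-neighbor matching is wrong and the iteration settles in a local minimum.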
In addition, ICP was permanently switched on, and the scanning parameters were modified to achieve higher quality of the reconstructed output. We strongly recommend using a computer with the highest CPU and memory configuration available to achieve the best possible performance. The program also operated reliably on lower configurations such as a conventional notebook; in this case, however, the GUI suffered considerable rendering delays. Interaction with the program windows was very limited, but the resulting mesh was comparable with meshes obtained from higher configurations, and the algorithm ran correctly. In our experiment, we mounted the Kinect sensor on a photographic tripod to achieve greater control of the camera motion while scanning the environment. We then rotated the sensor around its horizontal axis manually, giving the algorithm enough time to search for key points when necessary. Mesh generation ran simultaneously with the scanning process.

Fig 3. Scanned mesh of laboratory

The final mesh suffered from a high amount of noise; however, after a few trivial post-processing adjustments (removing isolated pieces, removing duplicate faces, merging close vertices, closing holes), we were able to use the mesh to determine the dimensions of the environment with satisfying precision. The standard measurement deviation for a 5-meter-wide room was 10 cm.

Fig 4. Scanned mesh of laboratory from top view

The problems of the algorithm revealed during the measurement were:
a.) The produced mesh was noisy.
b.) Scanning reflective or light-emitting surfaces is complicated. This problem is caused by the Kinect's operating principle and can also be noticed in results obtained from the Kinect Fusion algorithm.
c.) The loop closure problem. The algorithm is not aware of the room context, so it only sticks the data together and cannot recognize the necessity of loop closing.
d.) The algorithm performs poorly in weak light conditions.
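The SURF + RANSAC front end described in this section estimates a rigid 3D transformation from putative keypoint matches while rejecting outliers. A minimal, hypothetical sketch of that robust estimation step follows; the sample size of 3, the inlier threshold and the iteration count are arbitrary illustrative choices, not the parameters of RGBD-6D-SLAM.

```python
import numpy as np

def fit_rigid(src, dst):
    """Rigid transform (R, t) from matched 3D points via SVD (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC over putative 3D-3D matches: repeatedly fit a transform to
    3 random correspondences and keep the hypothesis with most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best hypothesis
    return fit_rigid(src[best_inliers], dst[best_inliers]) + (best_inliers,)
```

The non-determinism mentioned in Section IV is visible here: the result depends on the random samples, and more iterations raise the probability of drawing an all-inlier sample.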
If the movement between frames is too fast, feature detection fails, because the feature detection algorithms use RGB data. If the feature detector cannot determine matches between the last two frames, the algorithm stops and the whole process fails.

After scanning the environment with a desktop computer and a laptop, we decided to further investigate the possibility of running the algorithm on even lower configurations. As mentioned earlier, it is possible to compile and run the Kinect SLAM implementation on ARM-based computers, but currently none of the available boards is powerful enough to process all the data correctly. Frames are dropped, and the resulting meshes contain only some randomly placed vertices.

Fig 5. Incomplete mesh - result of whole room scanning on ARM board

V. KINECT FUSION SYSTEM

The Kinect Fusion system allows a user to pick up a standard Kinect camera and move rapidly within a room to reconstruct a high-quality, geometrically precise 3D model of the scene. To achieve this, the system continually tracks the 6DOF pose of the camera and fuses live depth data from the camera into a single global 3D model in real time. As the user explores the space, new views of the physical scene are obtained and fused into the scene model. The reconstruction therefore grows in detail as new depth measurements are added; holes are filled, and the model becomes more complete and refined over time [3].

A core component of the Kinect Fusion algorithm is the truncated signed distance function (TSDF), a volumetric surface representation where each element stores the signed distance to the closest surface. For data integration, Kinect Fusion computes a vertex and normal map from the raw depth data. The vertex and normal maps are then used to compute the camera pose via ICP-based registration against the predicted surface model raycast from the current TSDF. Given this pose, the depth data is integrated into the TSDF through a weighted running-average operation, which over time results in a smooth surface reconstruction [20]. Figure 6 provides a block-form overview of the Kinect Fusion method.

Fig 6. Kinect Fusion algorithm block diagram [3]

Surface measurement: A pre-processing stage where a dense vertex map and normal map pyramid are generated from the raw depth measurements obtained from the Kinect device. In this step, the algorithm uses bilateral filtering to reduce noise.

Surface reconstruction update: The global scene fusion stage, in which the camera pose is determined by tracking the depth data from a new sensor frame.
The surface measurement is then integrated into the scene model, which is maintained in a volumetric truncated signed distance function (TSDF) representation.

Surface prediction: The loop between mapping and localization is closed by tracking the live depth frame against the globally fused model. This is performed by raycasting the signed distance function into the estimated frame to provide a dense surface prediction against which the live depth map is aligned.

Sensor pose estimation: Live sensor tracking is achieved using multi-scale ICP alignment between the predicted surface and the current sensor measurement. The GPU-based implementation uses all the available data at frame rate [13].

The core component of the Kinect Fusion system, the truncated signed distance function, is a volumetric object representation. This 3D matrix is able to utilize all the range data and preserve range uncertainty information. Moreover, it updates the stored object representation incrementally and order-independently. It is also robust, places no restrictions on topologic type or shape, and is able to fill holes in the reconstructed object. More information is available in [22]. The complete Fusion pipeline is displayed in figure 7.

Fig 7. Complete Kinect Fusion pipeline [26]

A. Experiment with the Kinect Fusion System

To evaluate the results of the Kinect Fusion algorithm, we used the reference implementation provided with the Kinect for Windows SDK. This implementation does not allow the generation of large models, because it creates the model in a fixed space of approximately 2 x 2 m. We used the same scanning configuration for obtaining the scans and then merged the scans together manually in Meshlab. The resulting mesh did not suffer from noise as much as the mesh obtained by the RGBD-6D-SLAM algorithm, and it is almost dimensionally perfect. However, the resulting mesh has no surface texture, because the reference implementation does not include a solution for texturizing the final mesh.
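The weighted running-average TSDF integration described above can be sketched per voxel. This is an illustrative fragment only: the truncation distance and per-frame weight are arbitrary, and the voxel volume is flattened to a plain array for brevity.

```python
import numpy as np

def tsdf_update(tsdf, weight, sdf_new, mu=0.1, w_new=1.0):
    """Fuse one frame's signed-distance observations into a TSDF volume
    by a weighted running average (the Kinect Fusion integration rule).

    tsdf, weight : current truncated SDF values and accumulated weights
    sdf_new      : signed distance observed per voxel this frame
                   (NaN where the voxel is unobserved)
    mu           : truncation distance
    """
    d = np.clip(sdf_new / mu, -1.0, 1.0)  # truncate to [-1, 1]
    seen = ~np.isnan(sdf_new)
    w = weight + w_new * seen
    fused = np.where(
        seen,
        (weight * tsdf + w_new * np.nan_to_num(d)) / np.maximum(w, 1e-9),
        tsdf,
    )
    return fused, w
```

This averaging is what makes the system robust to moving subjects: a person passing through the scene contributes only a few low-weight observations, which are outvoted by the accumulated static measurements.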
Scanning with the Kinect Fusion system is more resistant to loss of feature tracking; however, loss is still possible. Because of the sensor used, the system does not deal well with reflective or light-emitting surfaces, but it is highly resistant to moving subjects in the scene, because all the scanned data are averaged.

Fig 8. Mesh created by merging the results from multiple scans with Kinect Fusion reference implementation

VI. CONCLUSIONS

We experimentally evaluated the RGBD-6D-SLAM algorithm and described its working principle. This SLAM approach is a combination of the scan-matching and graph-based SLAM approaches. After its publication, the algorithm became the subject of significant research effort, and many successful modified implementations have appeared that improve the robustness and efficiency of the algorithm or the precision of the obtained map. Various implementations use different feature detectors, graph optimization techniques, parallel computing and GPU processing. It is possible to run these algorithms on ARM-based microcomputers and notebooks; however, achieving satisfactory results requires a powerful computer, which might not always be available on a mobile robot. The RGBD-6D-SLAM algorithm can be a good starting point for developing one's own Kinect-based SLAM system on a mobile robot. Besides further software improvements, integrating more sensors and creating a hybrid system should be an option, especially when the transformations computed between the scans are less reliable.

The Kinect Fusion system provides a solution for 3D environment scanning, but at the moment it is not well suited for use on a mobile robot, mostly because of its high demands on GPU and overall system performance. However, the scanned map is almost geometrically exact with almost no noise present, and the algorithm achieves this using only the Kinect sensor, even during rapid movement and in complete darkness. The Kinect Fusion algorithm has also started a significant amount of research, and scientists from all over the world have extended it. Current implementations are able to successfully scan large-scale environments and correctly texturize the mesh output.
Implementations for operating systems other than Microsoft Windows exist; an open-source implementation called KinFu is available as part of the Point Cloud Library, with extensive documentation [20][21]. A possible solution to the high processing-power demands would be to stream Kinect data from the mobile robot to a remote server for processing and then stream the processed, lightweight representation of the map back, possibly to a whole robot group.

The output of both algorithms is a mesh in the *.ply format. The polygon file format is designed to store data from 3D scanners. It is not suitable for mobile robot map representation, but solutions exist for converting this format into the more suitable octree or occupancy grid formats.

Feature                              | RGBD-6D-SLAM                     | Kinect Fusion
Real-time scanning                   | Limited by size of the map       | Limited by size of the map
Offline scanning                     | Yes (fakenect)                   | Yes
Windows/Linux/Mac OS X               | Yes/Yes/Yes                      | Yes/Yes/?
Processing unit                      | CPU (+ GPU)                      | CPU + GPU with CUDA support
ARM support                          | Yes                              | No
Textured output                      | Yes                              | Yes
Noise                                | High                             | Low
System requirements for mobile robot | High                             | High
Output map format                    | Mesh                             | Mesh
Internal map representation          | 3D mesh and robot position graph | TSDF

Table 1. Comparison of Kinect 3D reconstruction methods for mobile robot

ACKNOWLEDGMENT

This work was supported by the projects VEGA 1/0178/13, KEGA 003STU-4/2014 and VEGA 1/0177/11.
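As an illustration of the format conversion mentioned above, the vertices of a *.ply mesh can be dropped into a fixed-resolution occupancy grid. This is a hypothetical minimal sketch that only marks occupied cells; real converters such as OctoMap additionally model free space by raycasting from the sensor origin.

```python
import numpy as np

def points_to_occupancy(points, resolution=0.05):
    """Convert an N x 3 point set (e.g. the vertices of a *.ply mesh)
    into a boolean 3D occupancy grid with the given cell size (meters)."""
    origin = points.min(axis=0)                 # grid anchored at the minimum corner
    idx = np.floor((points - origin) / resolution).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin
```

Such a grid (or its octree compression) is directly usable by standard path-planning and localization algorithms, unlike the raw mesh.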
REFERENCES

[1] N. Burrus, M. Abderrahim, J. Garcia, L. Moreno, "Object reconstruction and recognition leveraging an RGB-D camera," MVA2011 IAPR Conference on Machine Vision Applications, Nara, Japan, June 13-15, 2011.
[2] P. Henry, M. Krainin, E. Herbst, X. Ren, D. Fox, "RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments," The International Journal of Robotics Research.
[3] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, A. Fitzgibbon, "KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera."
[4] M. R. Andersen, T. Jensen, P. Lisouski, A. K. Mortensen, M. K. Hansen, T. Gregersen, P. Ahrendt, "Kinect depth sensor evaluation for computer vision applications," Technical report ECE-TR-6, Department of Engineering, Aarhus University, Denmark, 37 pp.
[5] M. Viager, "Analysis of Kinect for mobile robots," Technical University of Denmark, Lyngby, 2011.
[6] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, "SURF: Speeded Up Robust Features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, 2008.
[7]
[8] M. A. Fischler, R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, 24(6).
[9] G. Grisetti, R. Kuemmerle, C. Stachniss, U. Frese, C. Hertzberg, "Hierarchical optimization on manifolds for online 2D and 3D mapping," IEEE International Conference on Robotics and Automation (ICRA), 2010.
[10] M. Pirovano, "Kinfu - an open source implementation of Kinect Fusion + case study: implementing a 3D scanner with PCL," project assignment, 3D Structure from Visual Motion, 2011/2012.
[11]
[12]
[13] R. Newcombe, A. J. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, D. Molyneaux, S. Hodges, D. Kim, A. Fitzgibbon, "KinectFusion: Real-time dense surface mapping and tracking," October 2011.
[14] E. Eade, "Monocular simultaneous localisation and mapping," PhD thesis, University of Cambridge, 2008.
[15] D. Hähnel, W. Burgard, D. Fox, S. Thrun, "A highly efficient FastSLAM algorithm for generating cyclic maps of large-scale environments from raw laser range measurements," IROS, IEEE/RSJ, Las Vegas, USA, 2003.
[16] J. Callmer et al., "Radar SLAM using visual features," EURASIP Journal on Advances in Signal Processing, 71.
[17] M. M. Salem, "An economic simultaneous localization and mapping system for remote mobile robot using SONAR and an innovative AI algorithm," International Journal of Future Computer and Communication, vol. 2, no. 2, April 2013.
[18] P. Elinas, J. Little, "Stereo vision SLAM: Near real-time learning of 3D point-landmark and 2D occupancy-grid maps using particle filters," The 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, California, USA.
[19]
[20] T. Whelan, J. McDonald, M. Kaess, M. Fallon, H. Johannsson, J. J. Leonard, "Kintinuous: Spatially extended KinectFusion," RGB-D Workshop at Robotics: Science and Systems (RSS).
[21] H. Roth, M. Vona, "Moving volume KinectFusion," College of Computer and Information Science, Northeastern University, Boston, MA, 2012.
[22] B. Curless, M. Levoy, "A volumetric method for building complex models from range images," Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ACM.
[23]
[24]
[25] K. Židek, J. Svetlík, E. Rigasová, "Environment mapping and localization of mobile robotics," International Scientific Herald.
[26]


More information

3D Photography: Active Ranging, Structured Light, ICP

3D Photography: Active Ranging, Structured Light, ICP 3D Photography: Active Ranging, Structured Light, ICP Kalin Kolev, Marc Pollefeys Spring 2013 http://cvg.ethz.ch/teaching/2013spring/3dphoto/ Schedule (tentative) Feb 18 Feb 25 Mar 4 Mar 11 Mar 18 Mar

More information

3D Object Representations. COS 526, Fall 2016 Princeton University

3D Object Representations. COS 526, Fall 2016 Princeton University 3D Object Representations COS 526, Fall 2016 Princeton University 3D Object Representations How do we... Represent 3D objects in a computer? Acquire computer representations of 3D objects? Manipulate computer

More information

Filtering and mapping systems for underwater 3D imaging sonar

Filtering and mapping systems for underwater 3D imaging sonar Filtering and mapping systems for underwater 3D imaging sonar Tomohiro Koshikawa 1, a, Shin Kato 1,b, and Hitoshi Arisumi 1,c 1 Field Robotics Research Group, National Institute of Advanced Industrial

More information

Implementation of Odometry with EKF for Localization of Hector SLAM Method

Implementation of Odometry with EKF for Localization of Hector SLAM Method Implementation of Odometry with EKF for Localization of Hector SLAM Method Kao-Shing Hwang 1 Wei-Cheng Jiang 2 Zuo-Syuan Wang 3 Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung,

More information

Efficient SLAM Scheme Based ICP Matching Algorithm Using Image and Laser Scan Information

Efficient SLAM Scheme Based ICP Matching Algorithm Using Image and Laser Scan Information Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science (EECSS 2015) Barcelona, Spain July 13-14, 2015 Paper No. 335 Efficient SLAM Scheme Based ICP Matching Algorithm

More information

INTERACTIVE 3D ANIMATION SYSTEM BASED ON TOUCH INTERFACE AND EFFICIENT CREATION TOOLS. Anonymous ICME submission

INTERACTIVE 3D ANIMATION SYSTEM BASED ON TOUCH INTERFACE AND EFFICIENT CREATION TOOLS. Anonymous ICME submission INTERACTIVE 3D ANIMATION SYSTEM BASED ON TOUCH INTERFACE AND EFFICIENT CREATION TOOLS Anonymous ICME submission ABSTRACT Recently importance of tablet devices with touch interface increases significantly,

More information

Project Updates Short lecture Volumetric Modeling +2 papers

Project Updates Short lecture Volumetric Modeling +2 papers Volumetric Modeling Schedule (tentative) Feb 20 Feb 27 Mar 5 Introduction Lecture: Geometry, Camera Model, Calibration Lecture: Features, Tracking/Matching Mar 12 Mar 19 Mar 26 Apr 2 Apr 9 Apr 16 Apr 23

More information

Using the Kinect as a Navigation Sensor for Mobile Robotics

Using the Kinect as a Navigation Sensor for Mobile Robotics Using the Kinect as a Navigation Sensor for Mobile Robotics ABSTRACT Ayrton Oliver Dept. of Electrical and Computer Engineering aoli009@aucklanduni.ac.nz Burkhard C. Wünsche Dept. of Computer Science burkhard@cs.auckland.ac.nz

More information

Ensemble of Bayesian Filters for Loop Closure Detection

Ensemble of Bayesian Filters for Loop Closure Detection Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information

More information

A New Approach For 3D Image Reconstruction From Multiple Images

A New Approach For 3D Image Reconstruction From Multiple Images International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 9, Number 4 (2017) pp. 569-574 Research India Publications http://www.ripublication.com A New Approach For 3D Image Reconstruction

More information

Tracking an RGB-D Camera Using Points and Planes

Tracking an RGB-D Camera Using Points and Planes MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Tracking an RGB-D Camera Using Points and Planes Ataer-Cansizoglu, E.; Taguchi, Y.; Ramalingam, S.; Garaas, T. TR2013-106 December 2013 Abstract

More information

Geometric Reconstruction Dense reconstruction of scene geometry

Geometric Reconstruction Dense reconstruction of scene geometry Lecture 5. Dense Reconstruction and Tracking with Real-Time Applications Part 2: Geometric Reconstruction Dr Richard Newcombe and Dr Steven Lovegrove Slide content developed from: [Newcombe, Dense Visual

More information

Toward Online 3-D Object Segmentation and Mapping

Toward Online 3-D Object Segmentation and Mapping Toward Online 3-D Object Segmentation and Mapping Evan Herbst Peter Henry Dieter Fox Abstract We build on recent fast and accurate 3-D reconstruction techniques to segment objects during scene reconstruction.

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

3D Models from Range Sensors. Gianpaolo Palma

3D Models from Range Sensors. Gianpaolo Palma 3D Models from Range Sensors Gianpaolo Palma Who Gianpaolo Palma Researcher at Visual Computing Laboratory (ISTI-CNR) Expertise: 3D scanning, Mesh Processing, Computer Graphics E-mail: gianpaolo.palma@isti.cnr.it

More information

Dual Back-to-Back Kinects for 3-D Reconstruction

Dual Back-to-Back Kinects for 3-D Reconstruction Ho Chuen Kam, Kin Hong Wong and Baiwu Zhang, Dual Back-to-Back Kinects for 3-D Reconstruction, ISVC'16 12th International Symposium on Visual Computing December 12-14, 2016, Las Vegas, Nevada, USA. Dual

More information

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms L17. OCCUPANCY MAPS NA568 Mobile Robotics: Methods & Algorithms Today s Topic Why Occupancy Maps? Bayes Binary Filters Log-odds Occupancy Maps Inverse sensor model Learning inverse sensor model ML map

More information

3D Photography: Stereo

3D Photography: Stereo 3D Photography: Stereo Marc Pollefeys, Torsten Sattler Spring 2016 http://www.cvg.ethz.ch/teaching/3dvision/ 3D Modeling with Depth Sensors Today s class Obtaining depth maps / range images unstructured

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Scanning and Printing Objects in 3D Jürgen Sturm

Scanning and Printing Objects in 3D Jürgen Sturm Scanning and Printing Objects in 3D Jürgen Sturm Metaio (formerly Technical University of Munich) My Research Areas Visual navigation for mobile robots RoboCup Kinematic Learning Articulated Objects Quadrocopters

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion 007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,

More information

MonoRGBD-SLAM: Simultaneous Localization and Mapping Using Both Monocular and RGBD Cameras

MonoRGBD-SLAM: Simultaneous Localization and Mapping Using Both Monocular and RGBD Cameras MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com MonoRGBD-SLAM: Simultaneous Localization and Mapping Using Both Monocular and RGBD Cameras Yousif, K.; Taguchi, Y.; Ramalingam, S. TR2017-068

More information

Chaplin, Modern Times, 1936

Chaplin, Modern Times, 1936 Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections

More information

Visualization of Temperature Change using RGB-D Camera and Thermal Camera

Visualization of Temperature Change using RGB-D Camera and Thermal Camera 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 Visualization of Temperature

More information

A Pipeline for Building 3D Models using Depth Cameras

A Pipeline for Building 3D Models using Depth Cameras A Pipeline for Building 3D Models using Depth Cameras Avishek Chatterjee Suraj Jain Venu Madhav Govindu Department of Electrical Engineering Indian Institute of Science Bengaluru 560012 INDIA {avishek

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview Human Body Recognition and Tracking: How the Kinect Works Kinect RGB-D Camera Microsoft Kinect (Nov. 2010) Color video camera + laser-projected IR dot pattern + IR camera $120 (April 2012) Kinect 1.5 due

More information

Handheld scanning with ToF sensors and cameras

Handheld scanning with ToF sensors and cameras Handheld scanning with ToF sensors and cameras Enrico Cappelletto, Pietro Zanuttigh, Guido Maria Cortelazzo Dept. of Information Engineering, University of Padova enrico.cappelletto,zanuttigh,corte@dei.unipd.it

More information

RGBD Point Cloud Alignment Using Lucas-Kanade Data Association and Automatic Error Metric Selection

RGBD Point Cloud Alignment Using Lucas-Kanade Data Association and Automatic Error Metric Selection IEEE TRANSACTIONS ON ROBOTICS, VOL. 31, NO. 6, DECEMBER 15 1 RGBD Point Cloud Alignment Using Lucas-Kanade Data Association and Automatic Error Metric Selection Brian Peasley and Stan Birchfield, Senior

More information

CSE 145/237D FINAL REPORT. 3D Reconstruction with Dynamic Fusion. Junyu Wang, Zeyangyi Wang

CSE 145/237D FINAL REPORT. 3D Reconstruction with Dynamic Fusion. Junyu Wang, Zeyangyi Wang CSE 145/237D FINAL REPORT 3D Reconstruction with Dynamic Fusion Junyu Wang, Zeyangyi Wang Contents Abstract... 2 Background... 2 Implementation... 4 Development setup... 4 Real time capturing... 5 Build

More information

Virtualized Reality Using Depth Camera Point Clouds

Virtualized Reality Using Depth Camera Point Clouds Virtualized Reality Using Depth Camera Point Clouds Jordan Cazamias Stanford University jaycaz@stanford.edu Abhilash Sunder Raj Stanford University abhisr@stanford.edu Abstract We explored various ways

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

EVALUATION OF SEQUENTIAL IMAGES FOR PHOTOGRAMMETRICALLY POINT DETERMINATION

EVALUATION OF SEQUENTIAL IMAGES FOR PHOTOGRAMMETRICALLY POINT DETERMINATION Archives of Photogrammetry, Cartography and Remote Sensing, Vol. 22, 2011, pp. 285-296 ISSN 2083-2214 EVALUATION OF SEQUENTIAL IMAGES FOR PHOTOGRAMMETRICALLY POINT DETERMINATION Michał Kowalczyk 1 1 Department

More information

Visual Perception for Robots

Visual Perception for Robots Visual Perception for Robots Sven Behnke Computer Science Institute VI Autonomous Intelligent Systems Our Cognitive Robots Complete systems for example scenarios Equipped with rich sensors Flying robot

More information

Scanning and Printing Objects in 3D

Scanning and Printing Objects in 3D Scanning and Printing Objects in 3D Dr. Jürgen Sturm metaio GmbH (formerly Technical University of Munich) My Research Areas Visual navigation for mobile robots RoboCup Kinematic Learning Articulated Objects

More information

Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching

Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching Hauke Strasdat, Cyrill Stachniss, Maren Bennewitz, and Wolfram Burgard Computer Science Institute, University of

More information

Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration

Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration , pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information

More information

3D object recognition used by team robotto

3D object recognition used by team robotto 3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 12 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

Rigid ICP registration with Kinect

Rigid ICP registration with Kinect Rigid ICP registration with Kinect Students: Yoni Choukroun, Elie Semmel Advisor: Yonathan Aflalo 1 Overview.p.3 Development of the project..p.3 Papers p.4 Project algorithm..p.6 Result of the whole body.p.7

More information

Replacing Projective Data Association with Lucas-Kanade for KinectFusion

Replacing Projective Data Association with Lucas-Kanade for KinectFusion Replacing Projective Data Association with Lucas-Kanade for KinectFusion Brian Peasley and Stan Birchfield Electrical and Computer Engineering Dept. Clemson University, Clemson, SC 29634 {bpeasle,stb}@clemson.edu

More information

Volumetric 3D Mapping in Real-Time on a CPU

Volumetric 3D Mapping in Real-Time on a CPU Volumetric 3D Mapping in Real-Time on a CPU Frank Steinbrücker, Jürgen Sturm, and Daniel Cremers Abstract In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that

More information

International Conference on Communication, Media, Technology and Design. ICCMTD May 2012 Istanbul - Turkey

International Conference on Communication, Media, Technology and Design. ICCMTD May 2012 Istanbul - Turkey VISUALIZING TIME COHERENT THREE-DIMENSIONAL CONTENT USING ONE OR MORE MICROSOFT KINECT CAMERAS Naveed Ahmed University of Sharjah Sharjah, United Arab Emirates Abstract Visualizing or digitization of the

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 17 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

3D Digitization of Human Foot Based on Computer Stereo Vision Combined with KINECT Sensor Hai-Qing YANG a,*, Li HE b, Geng-Xin GUO c and Yong-Jun XU d

3D Digitization of Human Foot Based on Computer Stereo Vision Combined with KINECT Sensor Hai-Qing YANG a,*, Li HE b, Geng-Xin GUO c and Yong-Jun XU d 2017 International Conference on Mechanical Engineering and Control Automation (ICMECA 2017) ISBN: 978-1-60595-449-3 3D Digitization of Human Foot Based on Computer Stereo Vision Combined with KINECT Sensor

More information

Semantic Mapping and Reasoning Approach for Mobile Robotics

Semantic Mapping and Reasoning Approach for Mobile Robotics Semantic Mapping and Reasoning Approach for Mobile Robotics Caner GUNEY, Serdar Bora SAYIN, Murat KENDİR, Turkey Key words: Semantic mapping, 3D mapping, probabilistic, robotic surveying, mine surveying

More information

arxiv: v1 [cs.cv] 1 Jan 2019

arxiv: v1 [cs.cv] 1 Jan 2019 Mapping Areas using Computer Vision Algorithms and Drones Bashar Alhafni Saulo Fernando Guedes Lays Cavalcante Ribeiro Juhyun Park Jeongkyu Lee University of Bridgeport. Bridgeport, CT, 06606. United States

More information

Moving Object Detection by Connected Component Labeling of Point Cloud Registration Outliers on the GPU

Moving Object Detection by Connected Component Labeling of Point Cloud Registration Outliers on the GPU Moving Object Detection by Connected Component Labeling of Point Cloud Registration Outliers on the GPU Michael Korn, Daniel Sanders and Josef Pauli Intelligent Systems Group, University of Duisburg-Essen,

More information

Optimized KinectFusion Algorithm for 3D Scanning Applications

Optimized KinectFusion Algorithm for 3D Scanning Applications Optimized KinectFusion Algorithm for 3D Scanning Applications Faraj Alhwarin, Stefan Schiffer, Alexander Ferrein and Ingrid Scholl Mobile Autonomous Systems & Cognitive Robotics Institute (MASCOR), FH

More information

Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction

Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction MAIER ET AL.: EFFICIENT LARGE-SCALE ONLINE SURFACE CORRECTION 1 Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction Robert Maier maierr@in.tum.de Raphael Schaller schaller@in.tum.de

More information

IGTF 2016 Fort Worth, TX, April 11-15, 2016 Submission 149

IGTF 2016 Fort Worth, TX, April 11-15, 2016 Submission 149 IGTF 26 Fort Worth, TX, April -5, 26 2 3 4 5 6 7 8 9 2 3 4 5 6 7 8 9 2 2 Light weighted and Portable LiDAR, VLP-6 Registration Yushin Ahn (yahn@mtu.edu), Kyung In Huh (khuh@cpp.edu), Sudhagar Nagarajan

More information

Registration of Dynamic Range Images

Registration of Dynamic Range Images Registration of Dynamic Range Images Tan-Chi Ho 1,2 Jung-Hong Chuang 1 Wen-Wei Lin 2 Song-Sun Lin 2 1 Department of Computer Science National Chiao-Tung University 2 Department of Applied Mathematics National

More information

SPURRED by the ready availability of depth sensors and

SPURRED by the ready availability of depth sensors and IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED DECEMBER, 2015 1 Hierarchical Hashing for Efficient Integration of Depth Images Olaf Kähler, Victor Prisacariu, Julien Valentin and David

More information

CS 378: Autonomous Intelligent Robotics. Instructor: Jivko Sinapov

CS 378: Autonomous Intelligent Robotics. Instructor: Jivko Sinapov CS 378: Autonomous Intelligent Robotics Instructor: Jivko Sinapov http://www.cs.utexas.edu/~jsinapov/teaching/cs378/ Visual Registration and Recognition Announcements Homework 6 is out, due 4/5 4/7 Installing

More information

Multi-Volume High Resolution RGB-D Mapping with Dynamic Volume Placement. Michael Salvato

Multi-Volume High Resolution RGB-D Mapping with Dynamic Volume Placement. Michael Salvato Multi-Volume High Resolution RGB-D Mapping with Dynamic Volume Placement by Michael Salvato S.B. Massachusetts Institute of Technology (2013) Submitted to the Department of Electrical Engineering and Computer

More information

KinectFusion: Real-Time Dense Surface Mapping and Tracking

KinectFusion: Real-Time Dense Surface Mapping and Tracking KinectFusion: Real-Time Dense Surface Mapping and Tracking Gabriele Bleser Thanks to Richard Newcombe for providing the ISMAR slides Overview General: scientific papers (structure, category) KinectFusion:

More information

DeReEs: Real-Time Registration of RGBD Images Using Image-Based Feature Detection And Robust 3D Correspondence Estimation and Refinement

DeReEs: Real-Time Registration of RGBD Images Using Image-Based Feature Detection And Robust 3D Correspondence Estimation and Refinement DeReEs: Real-Time Registration of RGBD Images Using Image-Based Feature Detection And Robust 3D Correspondence Estimation and Refinement Sahand Seifi Memorial University of Newfoundland sahands[at]mun.ca

More information

Inertial-Kinect Fusion for Outdoor 3D Navigation

Inertial-Kinect Fusion for Outdoor 3D Navigation Proceedings of Australasian Conference on Robotics and Automation, 2-4 Dec 213, University of New South Wales, Sydney Australia Inertial-Kinect Fusion for Outdoor 3D Navigation Usman Qayyum and Jonghyuk

More information

Multiple View Depth Generation Based on 3D Scene Reconstruction Using Heterogeneous Cameras

Multiple View Depth Generation Based on 3D Scene Reconstruction Using Heterogeneous Cameras https://doi.org/0.5/issn.70-7.07.7.coimg- 07, Society for Imaging Science and Technology Multiple View Generation Based on D Scene Reconstruction Using Heterogeneous Cameras Dong-Won Shin and Yo-Sung Ho

More information

Simultaneous Localization and Mapping (SLAM)

Simultaneous Localization and Mapping (SLAM) Simultaneous Localization and Mapping (SLAM) RSS Lecture 16 April 8, 2013 Prof. Teller Text: Siegwart and Nourbakhsh S. 5.8 SLAM Problem Statement Inputs: No external coordinate reference Time series of

More information

Fast Sampling Plane Filtering, Polygon Construction and Merging from Depth Images

Fast Sampling Plane Filtering, Polygon Construction and Merging from Depth Images Fast Sampling Plane Filtering, Polygon Construction and Merging from Depth Images Joydeep Biswas Robotics Institute Carnegie Mellon University Pittsburgh, PA 523, USA joydeepb@ri.cmu.edu Manuela Veloso

More information

Hierarchical Sparse Coded Surface Models

Hierarchical Sparse Coded Surface Models Hierarchical Sparse Coded Surface Models Michael Ruhnke Liefeng Bo Dieter Fox Wolfram Burgard Abstract In this paper, we describe a novel approach to construct textured 3D environment models in a hierarchical

More information

Visualization of Temperature Change using RGB-D Camera and Thermal Camera

Visualization of Temperature Change using RGB-D Camera and Thermal Camera Visualization of Temperature Change using RGB-D Camera and Thermal Camera Wataru Nakagawa, Kazuki Matsumoto, Francois de Sorbier, Maki Sugimoto, Hideo Saito, Shuji Senda, Takashi Shibata, and Akihiko Iketani

More information

Monocular SLAM for a Small-Size Humanoid Robot

Monocular SLAM for a Small-Size Humanoid Robot Tamkang Journal of Science and Engineering, Vol. 14, No. 2, pp. 123 129 (2011) 123 Monocular SLAM for a Small-Size Humanoid Robot Yin-Tien Wang*, Duen-Yan Hung and Sheng-Hsien Cheng Department of Mechanical

More information

Dense 3D Reconstruction from Autonomous Quadrocopters

Dense 3D Reconstruction from Autonomous Quadrocopters Dense 3D Reconstruction from Autonomous Quadrocopters Computer Science & Mathematics TU Munich Martin Oswald, Jakob Engel, Christian Kerl, Frank Steinbrücker, Jan Stühmer & Jürgen Sturm Autonomous Quadrocopters

More information

Incremental compact 3D maps of planar patches from RGBD points

Incremental compact 3D maps of planar patches from RGBD points Incremental compact 3D maps of planar patches from RGBD points Juan Navarro and José M. Cañas Universidad Rey Juan Carlos, Spain Abstract. The RGBD sensors have opened the door to low cost perception capabilities

More information

Real-Time RGB-D Registration and Mapping in Texture-less Environments Using Ranked Order Statistics

Real-Time RGB-D Registration and Mapping in Texture-less Environments Using Ranked Order Statistics Real-Time RGB-D Registration and Mapping in Texture-less Environments Using Ranked Order Statistics Khalid Yousif 1, Alireza Bab-Hadiashar 2, Senior Member, IEEE and Reza Hoseinnezhad 3 Abstract In this

More information

Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images

Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images Ebrahim Karami, Siva Prasad, and Mohamed Shehata Faculty of Engineering and Applied Sciences, Memorial University,

More information

SURF: Speeded Up Robust Features. CRV Tutorial Day 2010 David Chi Chung Tam Ryerson University

SURF: Speeded Up Robust Features. CRV Tutorial Day 2010 David Chi Chung Tam Ryerson University SURF: Speeded Up Robust Features CRV Tutorial Day 2010 David Chi Chung Tam Ryerson University Goals of SURF A fast interest point detector and descriptor Maintaining comparable performance with other detectors

More information