REGION DETECTION AND DEPTH LABELING ON KINECT STREAMS

BULETINUL INSTITUTULUI POLITEHNIC DIN IAŞI
Publicat de Universitatea Tehnică "Gheorghe Asachi" din Iaşi
Tomul LX (LXIV), Fasc. 3-4, 2014
Secţia AUTOMATICĂ şi CALCULATOARE

REGION DETECTION AND DEPTH LABELING ON KINECT STREAMS

BY ALEXANDRU BUTEAN* and OANA BĂLAN

University POLITEHNICA of Bucharest, Faculty of Automatic Control and Computer Science

Received: December 2, 2014
Accepted for publication: December 20, 2014

Abstract. Although Kinect was designed as a gaming tool, studies in the last few years have shown that this sensor can be used for real-time environmental scanning, segmentation, classification and scene understanding. Our approach, based on Kinect or any other similar device, gathers depth and RGB data from the sensors and processes the information in near real time. The purpose is to divide the data into distinct regions based on depth and colour and then to calculate the distance for each detected area (depth labelling). To achieve good performance in many real situations involving humans, unlike other existing segmentation or depth calculation solutions, we considered from the beginning that humans are different from objects: most objects are static and therefore unlikely to change their dimensions and location from one frame to the next. We propose a method in which regions are detected by merging two different segmentation methods: human detection using skeletal tracking, and the RANSAC algorithm as a method for object detection. Our experimental results show that the solution, running on a mobile device (notebook), achieves only a modest improvement of at most 7% compared to the RANSAC object detection method alone.

Key words: depth labelling; 3D segmentation; human detection; Microsoft Kinect; scene understanding.

Mathematics Subject Classification: 00A06.

* Corresponding author; alexandru@butean.com

1. Introduction

When Kinect was first introduced on the market, in 2010, it was a technological wonder and its main purpose was to serve as a complementary device connected to an XBOX 360 console, providing a new kind of gaming experience and unique interaction capabilities. Two years later, the SDK was released; this was a crucial moment that triggered an important wave of work in many computer science areas such as image processing (Yang et al., 2012), video flows (Abramov et al., 2013), 3D reconstruction (Ren et al., 2013; Chen et al., 2012; Yang et al., 2012) and depth calculation (Andersen et al., 2012; Wang & Jia, 2012).

Fig. 1 Kinect main parts.

Fig. 1 shows the main parts: a depth camera, a simple RGB colour camera, an infrared sensor and an array of sensitive microphones. The stream contains real-time data at a frame rate of up to 30 fps and a maximum resolution of 640x480.

Fig. 2 Kinect sample output.

Fig. 2 presents a few samples of what the Kinect output looks like; from left to right: the RGB camera, the depth camera and the infrared sensor.

Our points of interest for this paper and for future research in this area are real-time segmentation (Yang et al., 2012), object detection (Li et al., 2011), 3D reconstruction (Izadi et al., 2011), indoor modelling (Shao et al., 2012) and important results on innovative depth calculation techniques (Newcombe et al., 2011; Chen et al., 2012; Kourosh & Sander, 2012). Existing methods in this area have proven to be very effective for specific purposes. Since we would like to develop a general-purpose method, our approach is to merge two of the existing solutions in order to get better overall results. This research area is still young and there are many results left to harvest.

We propose a depth labelling system that calculates the distances from the Kinect sensor to the humans and objects detected in the viewport. This is achieved by merging human detection methods with object detection algorithms. Every human and object is treated as a region; what makes our approach unique is the idea that, across consecutive frames, human positions change very fast, so humans should be treated differently from static segmented objects and should receive more processing power and attention. The system will thus be aware of the existence of humans in the scene, allowing different perspectives for scene understanding. If possible, we also seek to reduce the processing power needed for real-time segmentation, in order to be able to use the solution in future work on an integrated mobile assistive device.

2. Related Work

The depth data stream from a Kinect camera offers the possibility to create 3D reconstructions of indoor scenes and to use these results in applications such as CAD or gaming (Izadi et al., 2011). The KinectFusion system has many other uses: it can be seen as a low-cost handheld scanner that allows users to capture an object from different viewpoints and receive immediate feedback on the screen, or the reverse; it offers the possibility to segment the desired object from the scene through direct interaction; and it supports geometry-aware augmented reality, where a 3D virtual world is overlaid on and interacts with the real-world representation. Using this approach, aspects of real-world physics can be simulated in application areas like gaming and robotics. A very interesting application is to provide input for real-time object localization with embedded audio description (Gomez et al., 2011), an efficient assistance method that can evolve in the near future into an electronic travel aid for the visually impaired.

Many types of depth sensors, such as stereo-based RGB-D cameras, Time-of-Flight sensors and Kinect, can capture grayscale/colour images and corresponding per-pixel depth values of dynamic scenes simultaneously at up to 30 fps. Since the colour images used for stereo vision have higher resolution and better texture than those of the depth sensors, it is reasonable to fuse the depth data from the depth sensors with the colour cameras in order to produce corresponding high-resolution depth maps for the colour images (Wang & Jia, 2012). Another proposed method to improve the depth map given by the Kinect camera involves filling holes, refining edges and reducing noise. The method first detects the pixels with wrongly assigned depth, usually the ones near object boundaries, and then fills the holes by combining the region-growing method and a bilateral filter (Chen et al., 2012).

3. Method Description

3.1. Human Detection

After studying existing similar solutions, we realized that all of them apply effective algorithms, but in the segmentation and classification stages all objects are treated with the same priority. In contrast, we consider using Kinect's feature for detecting and performing skeletal tracking of players in games (Fig. 3).

Fig. 3 Kinect Skeleton Tracking points. Fig. 4 Kinect Skeleton Tracking lines.

Default skeletal tracking (Kar et al., 2010), illustrated in Fig. 4, outputs centre-axis lines for each human body part, but when region detection is needed, central axes are not enough to establish the outline of the human body. The solution for turning stick-skeleton tracking into human region detection involves help from the depth camera to gather similar values around a specific pixel. Using this idea, we can confidently assume that a human was detected correctly and track them during their entire presence in the stream (MSDN, 2014).

The described method works well for the first six humans inside a frame; using only one Kinect device, hardware limitations do not allow applying this kind of method to more. For now we consider that, in most practical cases, it is unlikely to find more humans within a 3-metre radius; if there are more, they will still be detected by the object detection methods, with a minor performance drawback and without being detected as humans. Of course, for a more precise count of the humans in a scene, several devices can run in parallel, each one dealing with at most six humans.
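The paper does not include code for this step. As an illustration only, the following C# sketch shows one way the "similar depth values around a specific pixel" idea could be implemented: starting from a seed pixel (for example, a skeleton joint projected into the depth image), a mask is grown over neighbouring pixels whose depth stays within a tolerance. The method name, the tolerance parameter and the convention that 0 means "no depth reading" are our assumptions, not part of the authors' implementation.

```csharp
using System;
using System.Collections.Generic;

static class HumanRegionGrowing
{
    // Grows a boolean mask around a seed pixel (e.g., a projected skeleton joint),
    // adding 4-connected neighbours whose depth differs from the current pixel
    // by at most 'toleranceMm'. Depth is assumed to be in millimetres; 0 = no data.
    public static bool[,] GrowHumanMask(ushort[,] depthMm, int seedX, int seedY, int toleranceMm)
    {
        int height = depthMm.GetLength(0);
        int width = depthMm.GetLength(1);
        var mask = new bool[height, width];

        if (depthMm[seedY, seedX] == 0)
            return mask;                       // seed has no valid depth reading

        var queue = new Queue<(int x, int y)>();
        queue.Enqueue((seedX, seedY));
        mask[seedY, seedX] = true;

        int[] dx = { 1, -1, 0, 0 };
        int[] dy = { 0, 0, 1, -1 };

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            ushort current = depthMm[y, x];

            for (int i = 0; i < 4; i++)
            {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (mask[ny, nx]) continue;

                ushort neighbour = depthMm[ny, nx];
                if (neighbour == 0) continue;  // invalid pixel, handled later by hole filling

                if (Math.Abs(neighbour - current) <= toleranceMm)
                {
                    mask[ny, nx] = true;
                    queue.Enqueue((nx, ny));
                }
            }
        }
        return mask;
    }
}
```

In the real system the seeds would come from the tracked skeleton joints and the masks of all joints of one player would be accumulated into a single human region; a fixed depth tolerance is only the simplest plausible criterion, since the paper does not state which one is used.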

3.2. Camera Calibration

In parallel with human detection, we evaluated several methods for object segmentation that apply to all the other objects in the stream. Before applying the segmentation process, as can be seen in Fig. 5, it has to be considered that the RGB camera has an important offset compared to the depth camera (Abramov et al., 2013).

Fig. 5 Color and depth offset.

In order to align the colour and depth data coming from two different cameras, a calibration step was applied using an already implemented method from the OpenNI toolbox, with the output shown in Fig. 6 (Villaroman et al., 2011).

Fig. 6 Output results after color-depth calibration with OpenNI.

Due to the depth measuring principle, the depth image contains optical noise, unmatched edges and invalid pixels that sometimes lead to holes. This could affect our segmentation process and lead to a high detection error, so a smoothing step is needed in order to avoid such results (Chen et al., 2012). Usually the wrong pixels are located between the edges of the depth map and the corresponding edges of the colour image. This particular problem can be solved using the region growing from the OpenNI toolbox: it has to be applied from the depth image edge until it reaches the colour image edge, and the exact same process is applied from the colour image towards the depth image. The final mask is obtained by applying an AND operator on those two results. Once the invalid pixels are detected using the mask, the next step is to fill the holes with estimated pixel values according to the valid pixels in the neighbourhood. To polish the results, a bilateral filter was applied. As shown in Fig. 7, the difference is remarkable and the process reveals sharp details without noise.

Fig. 7 Smoothing depth data using OpenNI toolbox.
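As a rough sketch of the hole-filling step described above (not the OpenNI routine the authors actually used), the following C# fragment fills each invalid depth pixel with the average of the valid pixels in a small window; a bilateral filter would then be applied to the result. The window radius, the method name and the convention that 0 marks an invalid reading are assumptions for illustration.

```csharp
static class DepthCleanup
{
    // Fills invalid depth pixels (flagged in 'invalidMask') with the average of the
    // valid pixels inside a (2*radius+1) x (2*radius+1) window. Pixels with no valid
    // neighbours are left untouched. Depth values are assumed to be in millimetres.
    public static ushort[,] FillHoles(ushort[,] depthMm, bool[,] invalidMask, int radius)
    {
        int height = depthMm.GetLength(0);
        int width = depthMm.GetLength(1);
        var result = (ushort[,])depthMm.Clone();

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                if (!invalidMask[y, x]) continue;   // only estimate the masked pixels

                long sum = 0;
                int count = 0;
                for (int dy = -radius; dy <= radius; dy++)
                {
                    for (int dx = -radius; dx <= radius; dx++)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        if (invalidMask[ny, nx] || depthMm[ny, nx] == 0) continue;
                        sum += depthMm[ny, nx];
                        count++;
                    }
                }
                if (count > 0)
                    result[y, x] = (ushort)(sum / count);
            }
        }
        return result;
    }
}
```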

3.3. Object Detection

The object detection method consists in segmenting the acquired point cloud using an adjacency matrix. The matrix is built from the given data, but takes into consideration the distance metric imposed by us (how close the points must be in order to be compared). The adjacency matrix thus allows an efficient lookup of points based on their locality. For each cell of the adjacency matrix we compute an average normal given by the RANSAC algorithm (Li & Putchakayala, 2012).

The RANdom SAmple Consensus (RANSAC) algorithm (Derpanis, 2010) is a general parameter estimation approach designed to cope with a large proportion of outliers in the input data. This resampling technique uses the minimum number of data points required to estimate the underlying model parameters. The algorithm randomly selects the minimum number of points required for a solution, solves for the parameters of the model, then checks how many data points from the set fit the model within a predefined tolerance. If the fraction of inliers over the total number of data points exceeds a predefined threshold, the model parameters are re-estimated using all the identified inliers and the algorithm stops; otherwise, the previous steps are repeated at most N times. N is chosen high enough to ensure, with probability p, that at least one of the sets of random samples does not include an outlier. Considering u the probability that any selected data point is an inlier, v = 1 − u the probability of observing an outlier, and m the minimum number of points required, the number of iterations N is calculated as follows:

N = log(1 − p) / log(1 − (1 − v)^m)                                   (1)

To compute the average normal (the plane's normal), we simply feed the adjacency cell into RANSAC. We then compare each point with each point from the neighbouring adjacency cells and check whether the distance between the compared points is smaller than a predefined threshold distance and whether the angle formed by them is smaller than a predefined threshold angle. If this heuristic holds for the given pair of points, we consider that those points belong to the same segment. After this, all the points considered to be in the same segment are merged and labelled with a random colour.
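The following C# sketch illustrates these two ingredients under stated assumptions: Eq. (1) for the iteration count, a minimal RANSAC plane-normal estimate for the points of one adjacency cell, and the pairwise merge test. The type and method names (Point3, CellNormal, SameSegment), the thresholds, and the reading of "the angle formed by them" as the angle between the two cell normals are our assumptions, not the authors' code.

```csharp
using System;
using System.Collections.Generic;

struct Point3 { public double X, Y, Z; }

static class PlaneRansac
{
    // Eq. (1): iterations needed so that, with probability p, at least one random
    // sample of m points is outlier-free, given outlier ratio v.
    public static int Iterations(double p, double v, int m)
    {
        return (int)Math.Ceiling(Math.Log(1 - p) / Math.Log(1 - Math.Pow(1 - v, m)));
    }

    // Estimates an average plane normal for one adjacency cell by RANSAC: sample
    // 3 points, build the plane through them, count points within 'distTol' of it,
    // and keep the normal of the plane with the largest consensus set.
    public static (double nx, double ny, double nz) CellNormal(IList<Point3> pts, double distTol, int iterations)
    {
        var rng = new Random();
        (double nx, double ny, double nz) best = (0, 0, 1);
        int bestInliers = -1;

        for (int it = 0; it < iterations; it++)
        {
            // Note: a sketch; duplicate samples are simply wasted iterations.
            Point3 a = pts[rng.Next(pts.Count)];
            Point3 b = pts[rng.Next(pts.Count)];
            Point3 c = pts[rng.Next(pts.Count)];

            // Plane normal = (b - a) x (c - a), normalised.
            double ux = b.X - a.X, uy = b.Y - a.Y, uz = b.Z - a.Z;
            double vx = c.X - a.X, vy = c.Y - a.Y, vz = c.Z - a.Z;
            double nx = uy * vz - uz * vy;
            double ny = uz * vx - ux * vz;
            double nz = ux * vy - uy * vx;
            double len = Math.Sqrt(nx * nx + ny * ny + nz * nz);
            if (len < 1e-9) continue;          // degenerate (collinear) sample
            nx /= len; ny /= len; nz /= len;

            int inliers = 0;
            foreach (var q in pts)
            {
                double d = Math.Abs(nx * (q.X - a.X) + ny * (q.Y - a.Y) + nz * (q.Z - a.Z));
                if (d < distTol) inliers++;
            }
            if (inliers > bestInliers) { bestInliers = inliers; best = (nx, ny, nz); }
        }
        return best;
    }

    // Merge heuristic for points of neighbouring cells: same segment when the points
    // are close enough and the angle between their cells' normals is small enough.
    public static bool SameSegment(Point3 a, (double nx, double ny, double nz) na,
                                   Point3 b, (double nx, double ny, double nz) nb,
                                   double distTol, double angleTolRad)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        if (Math.Sqrt(dx * dx + dy * dy + dz * dz) > distTol) return false;

        double dot = na.nx * nb.nx + na.ny * nb.ny + na.nz * nb.nz;
        dot = Math.Max(-1.0, Math.Min(1.0, dot));
        return Math.Acos(Math.Abs(dot)) < angleTolRad;   // normals treated as undirected
    }
}
```

For example, with p = 0.99, an assumed outlier ratio v = 0.3 and m = 3 points per plane sample, Iterations(0.99, 0.3, 3) gives roughly 11 iterations per cell.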

3.4. Merging Methods

Object detection was the first method implemented and it has a more general purpose, allowing us to detect any kind of object. Its problem is that, when the scene gets crowded, the frame rate drops badly. After adding the human detection method the system logic changed: the latter is a lot faster because it uses native Kinect capabilities. Unfortunately, Kinect has native functions only for humans. Our approach therefore proposes a mix between those methods, guided by these steps:
- duplicate the matrices for both the RGB and the depth streams;
- apply the human detection method;
- remove the human pixels and depth data from the matrices that serve as input for object detection;
- process the matrices again with the object detection method;
- merge the outputs and establish the regions.

3.5. Depth Labelling

Our conceptual idea is to merge two methods, but in the end the results from both human and object detection are treated as regions. For every computed region, the system gets the pixel with the closest depth and places the computed value (converted to metres) directly into a copy of the colour matrix, which is overlaid on top of the current colour matrix. An overlay was necessary because editing the colour data flow by altering the original pixels would affect the detection process.
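A minimal C# sketch of this flow is given below, covering the five merging steps and the per-region distance labelling. Everything in it is an assumption made for illustration: the Region class, the delegate signatures standing in for the two detectors, the 3-bytes-per-pixel RGB layout and the millimetre depth values with 0 marking a removed or invalid reading; the paper itself only describes the steps in prose.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One detected region: the pixel indices it covers (y * width + x) and its origin.
class Region
{
    public List<int> Pixels = new List<int>();
    public bool IsHuman;
}

static class DepthLabelling
{
    // High-level flow of the mixed method, following the steps listed in the paper.
    public static List<Region> DetectRegions(
        ushort[] depthMm, byte[] rgb,
        Func<ushort[], byte[], List<Region>> detectHumans,   // e.g., skeleton-based detector
        Func<ushort[], byte[], List<Region>> detectObjects)  // e.g., RANSAC-based detector
    {
        // 1. Duplicate the matrices for both streams.
        var depthCopy = (ushort[])depthMm.Clone();
        var rgbCopy = (byte[])rgb.Clone();

        // 2. Apply the human detection method.
        var humans = detectHumans(depthCopy, rgbCopy);

        // 3. Remove human pixels and depth data before object detection.
        foreach (var h in humans)
            foreach (int i in h.Pixels)
            {
                depthCopy[i] = 0;
                rgbCopy[3 * i] = rgbCopy[3 * i + 1] = rgbCopy[3 * i + 2] = 0;
            }

        // 4. Process the remaining data with the object detection method.
        var objects = detectObjects(depthCopy, rgbCopy);

        // 5. Merge the outputs into a single list of regions.
        humans.ForEach(r => r.IsHuman = true);
        return humans.Concat(objects).ToList();
    }

    // Depth labelling: for each region, take the closest valid depth and convert to metres.
    public static Dictionary<Region, double> LabelRegions(IEnumerable<Region> regions, ushort[] depthMm)
    {
        var labels = new Dictionary<Region, double>();
        foreach (var region in regions)
        {
            var valid = region.Pixels.Select(i => (int)depthMm[i]).Where(d => d > 0).ToList();
            if (valid.Count == 0) continue;          // region has no usable depth
            labels[region] = valid.Min() / 1000.0;   // closest point, mm -> m
        }
        return labels;
    }
}
```

The returned labels would then be drawn onto the overlay copy of the colour matrix, so that the original colour stream feeding the detectors remains untouched, as described above.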

4. Results

We implemented our proposed solution using Kinect SDK 1.8; development was done in Microsoft Visual Studio 2013 with WPF and C#. The experimental results shown in this section were obtained using a Kinect device connected to a notebook with an Intel i5 1.8 GHz processor, 4 GB of RAM, a 256 GB SSD drive and an Intel HD Graphics 4000 integrated GPU. The Kinect stream has a resolution of 640x480, with a starting frame rate of 30 fps.

Fig. 8 Object detection results. Fig. 9 Human detection results. Fig. 10 Human detection + object detection results.

As we can see in Fig. 8, the object detection methods work smoothly on their own. From our measurements, the number of frames per second (FPS) varies between 10 and 2, depending on the number of objects in the scene. As future work, we will run a test in a completely white room, adding objects one by one, in order to establish exactly when the FPS starts to drop drastically. In the middle, Fig. 9 shows the human detection results, working at 30 FPS thanks to the extended skeleton detection. Here we can clearly see the offset between the RGB image and the depth overlay; we intentionally did not apply calibration, in order to keep the native functions that allow fast and precise detection. Fig. 10 shows the final results with both methods activated. The detection was precise, but the measured FPS was between 6 and 12. With an improvement of only 7% compared to the object detection method, the results show that this approach is still far from real time (30 FPS).

5. Conclusions

In this paper we presented a mixed method for region detection and depth labelling using Microsoft Kinect sensors. Our approach brings to attention an uncommon classification that considers humans different from ordinary objects, and therefore uses different segmentation methods for each. For object segmentation we use region-growing smoothing and the RANSAC algorithm; for detecting humans we use an extended version of the basic skeleton tracking. The output data from both methods are merged into regions and labelled with distance values. It is well known that RGB-D 3D object segmentation needs a high-volume processing environment. By mixing an existing segmentation method with a native detection method, we wanted to be able to run the solution on a mobile device (notebook) that does not have extreme processing resources. Since the progress in this area was modest, in order to achieve better results in the future we would like to try parallel GPU processing (Asavei et al., 2010). As an important result, human detection works together with object detection. This solution allows an autonomous system to be aware of the existence of humans in the scene, information that can be an influential improvement for scene understanding. Xbox One was launched recently; an interesting idea would be to test this solution with the new Kinect One, which comes with Full HD cameras and a highly improved SDK.

Acknowledgments. The work has been funded by the Sectoral Operational Programme Human Resources Development of the Ministry of European Funds through the Financial Agreements POSDRU/159/1.5/S/ and POSDRU/159/1.5/S/.

REFERENCES

* * * Microsoft Developer Network, Method for Using Skeleton Tracking.
Abramov A., Pauwels K., Papon J., Worgotter F., Dellen B., Depth-Supported Real-Time Video Segmentation with the Kinect. IEEE Workshop on Applications of Computer Vision (WACV).
Andersen M.R., Jensen T., Lisouski P., Mortensen A.K., Hansen M.K., Gregersen T., Ahrendt P., Kinect Depth Sensor Evaluation for Computer Vision Applications. Electrical and Computer Engineering Technical Report ECE-TR-6.
Asavei V., Moldoveanu A., Moldoveanu F., Morar A., Egner A., GPGPU for Cheaper 3D MMO Servers. 9th WSEAS International Conference on Telecommunications and Informatics, Session Information Science and Applications.
Chen Li, Lin Hui, Shutao Li, Depth Image Enhancement for Kinect Using Region Growing and Bilateral Filter. 21st International Conference on Pattern Recognition (ICPR).
Derpanis K.G., Overview of the RANSAC Algorithm. Image Rochester NY, 4, 2-3.
Gomez J.D., Mohammed S., Bologna G., Pun T., Toward 3D Scene Understanding via Audio-Description: Kinect-iPad Fusion for the Visually Impaired. ASSETS '11, Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility.
Izadi S., Kim D., Hilliges O., Molyneaux D., Newcombe R., Kohli P., Shotton J., Freeman D., Davison A., Fitzgibbon A., KinectFusion: Real-Time 3D Reconstruction and Interaction Using a Moving Depth Camera. Microsoft Research Center.
Kar A., Mukerjee A., Guha P., Skeletal Tracking Using Microsoft Kinect. Methodology, 1.
Kourosh K., Sander O., Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. Sensors, 12, 2012.
Li T., Putchakayala P., Wilson M., 3D Object Detection with Kinect.
Newcombe R.A., Izadi S., Hilliges O., Molyneaux D., Kim D., Davison A.J., Kohli P., Shotton J., Hodges S., Fitzgibbon A., KinectFusion: Real-Time Dense Surface Mapping and Tracking. ISMAR '11, Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality.
Ren C.Y., Prisacariu V., Murray D., Reid I., STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data. Proc. Int. Conf. on Computer Vision, Sydney, Australia.
Shao T., Xu W., Zhou K., Wang J., Li D., Guo B., An Interactive Approach to Semantic Modeling of Indoor Scenes with an RGBD Camera. ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH Asia 2012, 31, 6, Article No. 136.
Villaroman N., Rowe D., Swan B., Teaching Natural User Interaction Using OpenNI and the Microsoft Kinect Sensor. SIGITE '11, Proceedings of the Conference on Information Technology Education.
Wang Y., Jia Y., A Fusion Framework of Stereo Vision and Kinect for High-Quality Dense Depth Maps. ACCV'12, Proceedings of the 11th International Conference on Computer Vision, 2, 2012.
Yang Z., Jin L., Tao D., Kinect Image Classification Using LLC. ICIMCS '12, Proceedings of the 4th International Conference on Internet Multimedia Computing and Service, 50-54.

REGION-OF-INTEREST DETECTION AND DISTANCE LABELLING USING KINECT DATA STREAMS

(Abstract)

Although Kinect was designed as a tool for the console gaming industry, in recent years studies have shown that this sensor can be used for real-time scanning and understanding of the surrounding environment, object segmentation and classification. Our approach uses the data from the depth and RGB cameras and processes the information in near real time. The goal is to divide the data into distinct regions based on depth and colour, and then to compute the distance for each detected area (depth labelling). What makes this approach unique is that we considered that, within a scene, humans are different from objects. Most objects are static, so they are less likely to change their dimensions and location in every frame. We propose a method in which the regions of interest are detected by merging two different segmentation approaches: human detection using skeleton detection and the RANSAC algorithm as an object detection method. The experimental results so far show that the solution runs on a mobile device (notebook) with a very modest improvement of at most 7% compared to the RANSAC object detection method. Detection and segmentation methods for 3D objects using the depth camera usually run on systems that offer high processing power. Our research aimed at running these methods on low-power, laptop-class mobile devices that do not benefit from such processing power. 3D object detection systems based on the RANSAC algorithm run on our mobile configuration at values of at most 10 FPS; therefore, an improvement of only 7% using the mixed method does not fully solve the problem. These results nevertheless encourage us and lead us towards new optimization attempts using parallel processing methods on the graphics card's processors.
