Real-Time Plane Segmentation and Obstacle Detection of 3D Point Clouds for Indoor Scenes
Zhe Wang, Hong Liu, Yueliang Qian, and Tao Xu

Key Laboratory of Intelligent Information Processing and Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Abstract. Scene analysis is an important issue in computer vision, and extracting structural information is one of its fundamental techniques. Taking advantage of a depth camera, we propose a novel fast plane segmentation algorithm and use it to detect obstacles in indoor environments. The proposed algorithm has two steps: an initial segmentation and a refined segmentation. First, the depth image is converted into a 3D point cloud and divided into voxels, which are less sensitive to noise than individual pixels; an area-growing algorithm then extracts candidate planes according to the normal of each voxel. Second, each point that has not been classified to any plane is examined to decide whether it actually belongs to one. The experimental results demonstrate that our method segments planes and detects obstacles in real time with high accuracy for indoor scenes.

Keywords: plane segmentation, point cloud, obstacle detection.

1 Introduction

Understanding the structural information of the surrounding environment is a principal issue for indoor service robots and wearable obstacle avoidance devices. Indoor scenes contain many man-made planes, and plane detection and segmentation is the fundamental technique for understanding this structure; it can be used in many important applications, such as robot navigation, object detection, and scene labeling. The accuracy of plane segmentation is thus highly related to the performance of the whole scene understanding system.
Furthermore, as the basic step of scene understanding systems, plane segmentation should run in real time. Recently, depth cameras such as the Microsoft Kinect have generated a booming effect in the computer vision field. The Kinect acquires depth information at a high frame rate (30 Hz), with accuracy roughly comparable to 3D laser scanners at close range. With depth data, the shape, size, geometry, and several other properties of objects can be determined more precisely.

A. Fusiello et al. (Eds.): ECCV 2012 Ws/Demos, Part II, LNCS 7584, © Springer-Verlag Berlin Heidelberg 2012

As a result,
depth cameras have greatly stimulated researchers' enthusiasm and have been widely used in many computer vision systems, such as mobile robot navigation [1], 3D reconstruction [2], and indoor navigation systems [3]. Based on a single depth camera from one viewpoint, we focus on a fast plane segmentation method and use it to detect obstacles in indoor scenes. The depth image grabbed by a single depth camera contains noise. Furthermore, the resolution of the Kinect depth image is 640 × 480 pixels, an excessive amount of data for real-time processing. In this paper we present an algorithm to accurately segment all planes in a depth image and then detect the obstacles in indoor scenes. We propose a novel two-step strategy, PVP, for fast Plane segmentation, consisting of a Voxel-wise initial segmentation followed by a Pixel-wise accurate segmentation. First, we convert the depth image into a 3D point cloud and divide the point cloud into voxels; the normal of each voxel is calculated, and an area-growing algorithm extracts candidate planes according to these normals. Second, each point that has not been classified to any plane is examined to decide whether it actually belongs to one, and fragments of the same plane are merged together. The experimental results show that the proposed two-step strategy PVP is a fast segmentation method with high accuracy.

2 Related Work

Many plane segmentation algorithms for 3D data have been proposed in recent years. Some researchers treat depth images as 2D images and apply 2D segmentation methods on them directly; the only difference is that the value of each pixel in a depth image is not limited to the range 0 to 255. When advanced segmentation algorithms such as mean-shift clustering [4] or graph-based segmentation [5] are adopted, this approach gives good results in some scenes.
However, in other situations, such as close or touching objects, these methods perform badly. Other algorithms, such as [6] and [7], use RANSAC or J-linkage [8] to robustly estimate the parameters of all planes in a depth image, but these algorithms are usually slow and can hardly be adopted in real-time systems; furthermore, they are unsuitable for large and complex scenes such as office rooms. Consequently, researchers have sought more efficient methods. Holz et al. [9] use integral images to compute local surface normals of point clouds; the points are then clustered, segmented, and classified in both normal space and spherical coordinates. Their experiments show that the algorithm achieves a frame rate of 30 Hz at a resolution of 160 × 120 pixels, but more advanced tasks such as object classification may need a higher resolution. Dube et al. [10] extract planes from depth images based on the Randomized Hough Transform, using a noise model of the sensor to find proper parameter metrics. This algorithm is real-time capable on a mobile robot platform; however, it can only detect planes, and cannot accurately segment them.
Fig. 1. System architecture

3 Overview of Our Approach

Fig. 1 shows the architecture of our system, which contains three stages: data preprocessing, plane segmentation, and obstacle detection. In the data preprocessing stage, the depth image obtained from the Kinect is converted to the corresponding 3D point cloud, and a voxel grid is created on the point cloud. For each voxel, a plane equation is estimated using the least squares technique, and the normal of the plane is obtained from the coefficients of the equation. In the plane segmentation stage, an area-growing algorithm first extracts candidate planes in the initial segmentation step. The remaining points are then examined one by one to achieve an accurate segmentation: for each point, we search the 3D space within a certain radius to determine whether it belongs to a plane. Plane fragments are merged in the last step of plane segmentation. In the obstacle detection stage, the planes are removed from the point cloud, and an area-growing algorithm extracts point clusters, each of which is considered an obstacle.
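To make the preprocessing stage concrete, the back-projection from a depth image to a 3D point cloud can be sketched as below. The intrinsics (fx, fy, cx, cy) are placeholder values commonly quoted for the Kinect, not the calibration used by the authors.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (meters) to an N x 3 point cloud.

    The intrinsics default to commonly quoted Kinect values; they are
    illustrative placeholders, not the paper's calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop invalid (zero-depth) pixels
```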
4 Real-Time Plane Segmentation Algorithm PVP

A 3D point cloud contains much structural information that can be used for plane segmentation. However, the point cloud obtained from the Kinect contains a large amount of data, so algorithms operating on it directly are often time-consuming. In this paper, we propose a novel two-step plane segmentation algorithm, PVP, which first applies a voxel-wise initial segmentation and then a pixel-wise refinement. The voxel-wise segmentation gives a rough but extremely fast result, and the pixel-wise refinement then greatly enhances the accuracy. The combination of the two steps strikes a good balance between speed and accuracy.

4.1 Initial Plane Segmentation

Plane segmentation algorithms normally check each pixel of the image to determine whether it can be classified to a certain plane. However, we find that in a 3D point cloud, planes always consist of many flat pieces of points; these pieces have the same normal and are adjacent to each other. Inspired by this observation, we construct a voxel grid on the point cloud and let each voxel be the smallest unit of the initial plane segmentation. A voxel is a small 3D box containing several points, as Fig. 2 shows; the voxel side length used in our experiments is 20 cm. For each voxel, we use least squares estimation to fit a plane equation to all the points in the voxel. Assume the plane equation is Ax + By + Cz + 1 = 0; the least squares estimate for A, B, C is

    [ Σ x_i²     Σ x_i y_i   Σ x_i z_i ] [A]     [ Σ x_i ]
    [ Σ x_i y_i  Σ y_i²      Σ y_i z_i ] [B] = - [ Σ y_i ]     (1)
    [ Σ x_i z_i  Σ y_i z_i   Σ z_i²    ] [C]     [ Σ z_i ]

where (x_i, y_i, z_i), i = 1, 2, ..., N, are the points in the voxel. It is a standard result of analytic geometry that the vector [A B C]^T is exactly the normal of this plane. After calculating the plane equations of all the voxels, we use the normals to extract initial planes. Plane segmentation can then be treated as a clustering problem.
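Eq. (1) amounts to solving a 3 × 3 normal-equation system per voxel. A minimal sketch in NumPy (an illustration, not the authors' implementation):

```python
import numpy as np

def fit_voxel_plane(points):
    """Fit Ax + By + Cz + 1 = 0 to one voxel's points via the normal
    equations of Eq. (1); returns (A, B, C), which is also the plane normal."""
    P = np.asarray(points, dtype=float)   # N x 3 array of (x, y, z)
    M = P.T @ P                           # 3 x 3 matrix of the sums in Eq. (1)
    b = -P.sum(axis=0)                    # right-hand side: -(Σx, Σy, Σz)
    return np.linalg.solve(M, b)          # [A, B, C]
```

For example, points sampled from the plane z = 1 yield the coefficients (0, 0, -1), i.e. a normal along the z axis.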
Fig. 2. Construction of the voxel grid on the 3D point cloud

However, directly clustering the points of the 3D point cloud would not meet the efficiency demands of a real-time system. We notice that planes in the point cloud consist of small flat pieces, and pieces of the same plane share the same normal direction. After constructing the voxel grid, each voxel is such a piece of points: if adjacent voxels have the same normal direction, they can be considered part of the same plane. We therefore cluster voxels whose normal directions agree, and a voxel cluster larger than a threshold is determined to be a plane. The detailed procedure is described in Algorithm 1; the number of adjacent voxels examined is 26 in our experiments.

Algorithm 1. Initial Plane Segmentation
1: Traverse the voxel grid and find a voxel V_0 that has not been processed before. Calculate the average distance d of the points in V_0 to the estimated plane of V_0. If d is smaller than a threshold, create a queue Q and add V_0 to Q; otherwise, find another V_0.
2: Examine each of the 26 adjacent voxels V_i (i = 1, 2, ..., 26) surrounding V_0. If V_i has not been processed, let n_i = (x_i, y_i, z_i) denote the normal of V_i and n_0 = (x_0, y_0, z_0) the normal of V_0, and calculate the cosine of the angle between n_i and n_0:

    cos θ = (x_i x_0 + y_i y_0 + z_i z_0) / (|n_i| |n_0|)     (2)

If cos θ is larger than a threshold, record that V_i and V_0 belong to the same plane and insert V_i into Q.
3: Pick an element from Q, regard it as V_0, and return to step 2.
4: If Q is empty, return to step 1.
5: Repeat steps 1 to 4 until all the voxels have been examined.

After these steps we obtain many voxel clusters. A cluster containing more than a certain number of voxels is considered a candidate plane. The voxels belonging to a plane are called planar voxels, and the remaining voxels are called non-planar voxels; the points in all non-planar voxels are examined pixel by pixel in the refined segmentation. Note that the voxel-wise plane segmentation is less sensitive to noise, because a few noisy points do not change the normal of a voxel.

4.2 Refined Plane Segmentation

After the initial plane segmentation, we have several roughly segmented candidate planes. To obtain an accurate segmentation, we check each point of the non-planar point cloud to determine whether it belongs to an existing plane. If a point lies on a plane, its distance to that plane is near zero, so for each point we calculate its distance to the candidate planes; if the distance is small, the point is considered to belong to that plane. However, comparing against every candidate plane is too global and neglects local structural constraints, so we instead search the neighborhood of each point for candidate planes and compute distances only to these neighboring planes.

More specifically, we traverse the voxel grid and find the voxels that have not been clustered to any plane. We take one of these voxels as the center and search the space around it within a radius r; to speed up the search, we simply use a cube of side r cm, where r is an integral multiple of the voxel side length. If there are no planar voxels near the center, all the points in the central voxel are considered non-planar points. Otherwise, for each point in the central voxel, we calculate the distances between the point and every neighboring planar voxel. The plane equation of each voxel was already computed when the voxel grid was created, so we can use it directly. Let p = (x, y, z) denote a point in the central voxel, and let V_ij denote the i-th planar voxel in the neighborhood, belonging to plane P_j, with plane equation Ax + By + Cz + 1 = 0. The distance from p to V_ij is

    d_ij = |Ax + By + Cz + 1| / sqrt(A² + B² + C²)     (3)

The distance is calculated for every recorded planar voxel, and we choose d = min d_ij. If d is smaller than a threshold, point p is considered a point of the corresponding plane P_j; otherwise p does not lie on any plane. All the points in the central voxel are classified in this way. After applying this process to all the non-planar voxels, we obtain a refined plane segmentation, and all the planes are segmented.
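The refinement of a single point can be sketched as follows; the distance threshold is an assumed value, since the paper only says "smaller than a threshold", and the dictionary of neighboring planes stands in for the cube search around the central voxel.

```python
import numpy as np

def assign_point(p, neighbor_planes, dist_thresh=0.05):
    """Assign point p to its nearest neighboring candidate plane (Eq. 3).

    neighbor_planes maps a plane id to its (A, B, C) coefficients from
    Ax + By + Cz + 1 = 0. The 5 cm threshold is an assumed value.
    Returns a plane id, or None if p lies on no nearby plane.
    """
    best_id, best_d = None, dist_thresh
    x, y, z = p
    for pid, (A, B, C) in neighbor_planes.items():
        # point-to-plane distance, Eq. (3)
        d = abs(A * x + B * y + C * z + 1) / np.sqrt(A * A + B * B + C * C)
        if d < best_d:
            best_id, best_d = pid, d
    return best_id
```

A point very close to the plane z = 1 (coefficients (0, 0, -1)) is assigned to it, while a point far from every neighboring plane is left as non-planar.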
However, some planes may still be divided into several parts, so fragments that actually belong to one big plane should be merged. For each segmented plane, we estimate the plane normal based on all the points belonging to that plane. If the angle between the normal directions of two planes is smaller than a threshold and the two planes are adjacent, they are merged into one plane, which replaces them. These steps are repeated until no plane can be merged with another. Fig. 3 presents an example of the result after accurate segmentation and plane merging.
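The merging step can be sketched as a union-find over adjacent plane fragments. The cosine threshold and the pairwise adjacency representation are assumptions; the paper only specifies that the angle must be "smaller than a threshold" and the planes adjacent.

```python
import numpy as np

def merge_planes(planes, adjacency, cos_thresh=0.98):
    """Greedily merge adjacent plane fragments whose unit normals agree.

    planes: {id: unit normal (3-vector)}; adjacency: set of (id_a, id_b)
    pairs of adjacent fragments. cos_thresh ~ 0.98 (about 11 degrees) is an
    assumed value. Returns a map from fragment id to merged-plane label.
    """
    label = {pid: pid for pid in planes}

    def find(i):                         # union-find root with path compression
        while label[i] != i:
            label[i] = label[label[i]]
            i = label[i]
        return i

    changed = True
    while changed:                       # repeat until no merge is possible
        changed = False
        for a, b in adjacency:
            ra, rb = find(a), find(b)
            # merge when the fragment normals are (anti)parallel enough
            if ra != rb and abs(np.dot(planes[a], planes[b])) >= cos_thresh:
                label[rb] = ra
                changed = True
    return {pid: find(pid) for pid in planes}
```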
Fig. 3. An example of plane segmentation. (a) The original point cloud. (b) The result of the initial segmentation: gray points have not been assigned to a plane, while colored points belong to a plane. (c) The result of the accurate segmentation. (d) The result after plane merging.

5 Obstacle Detection

Plane segmentation is a fundamental technique with uses in obstacle detection, scene segmentation, environment understanding, and more. We apply our plane segmentation algorithm PVP in an indoor obstacle detection system, where it performs very well. The idea of our obstacle detection algorithm is that the ground and walls of an indoor environment are planes, and obstacles usually lie on the ground or stand beside the walls; if the ground and walls are removed, the remaining objects are all obstacles. Following this idea, we detect obstacles in the point cloud as follows. First, all planes are extracted using PVP. We then use a size threshold to select the large planes, which are likely the ground or walls, while small planes could be the surfaces of desks or boxes. The large planes are removed from the point cloud, so the remaining points are potential obstacles. To segment the obstacles, we again adopt an area-growing algorithm. We do not cluster the points directly, because that is too slow; instead, following the same idea as the initial plane segmentation, a voxel grid is created and area growing extracts the voxel clusters. Here, voxels are clustered if they are adjacent and both contain more than a certain number of points, so the growing strategy differs from that of the plane segmentation.
This strategy correctly handles weakly connected obstacles, preventing them from being clustered together. After area growing, we obtain several voxel clusters, and all the points in the voxels of one cluster form an obstacle. The method is simple and fast, which makes it very suitable for indoor navigation or guidance systems. Furthermore, since obstacles are detected in 3D, the size, direction, and distance of each obstacle are easily calculated; this information can be used for the automatic navigation of indoor robots or by wearable guidance devices.
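The area growing used for obstacle detection is essentially a flood fill over occupied voxels. A sketch under the assumption that the grid is stored as a dictionary of per-voxel point counts; the minimum point count is a made-up value, as the paper only requires "more than a certain number of points".

```python
from collections import deque

def cluster_voxels(occupied, min_points=30):
    """Area growing over a voxel grid: flood-fill occupied voxels into
    clusters; each cluster is reported as one obstacle.

    occupied: {(i, j, k): point_count}. min_points is an assumed filter.
    """
    seen, clusters = set(), []
    for seed in occupied:
        if seed in seen or occupied[seed] < min_points:
            continue
        queue, cluster = deque([seed]), []
        seen.add(seed)
        while queue:
            v = queue.popleft()
            cluster.append(v)
            i, j, k = v
            # grow into the 26 adjacent voxels
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    for dk in (-1, 0, 1):
                        n = (i + di, j + dj, k + dk)
                        if n != v and n in occupied and n not in seen \
                                and occupied[n] >= min_points:
                            seen.add(n)
                            queue.append(n)
        clusters.append(cluster)
    return clusters
```

Two occupied voxels that touch end up in one cluster (one obstacle), while isolated or sparsely populated voxels form their own clusters or are discarded.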
Fig. 4. Plane segmentation and obstacle detection results for three scenes. The first row is a sample of the corridor, the second the classroom, and the third the office room. In each row, the first column is the original point cloud generated from the depth image, the second column is the plane segmentation result (different planes painted with different colors), and the third column is the obstacle detection result (different obstacles painted with different colors).

6 Experimental Results

We test the proposed algorithm on the dataset from [11], acquired with a Kinect mounted on a remotely controlled robot. The results are measured over 30 depth images taken in three different scenes: a classroom, an office room, and a corridor, with 10 images per scene. We first use our algorithm to segment large planes such as the ground and walls, and then use the result to detect obstacles. Fig. 4 shows the plane segmentation and obstacle detection results for each scene.

Our system runs on a laptop with an Intel Core i7 2.0 GHz CPU; the algorithms run sequentially in a single thread on a single core. Since the detection rates are not affected by the input image, and the CPU we used is the entry-level model of the i7 series (and thus no more powerful than the CPU used in [9]), we take the results from [9] for comparison. The results are recorded in Table 1. Our method achieves real-time processing at VGA resolution, while Holz's method runs at only 7 Hz at this resolution; at reduced image resolutions our method also performs much better. The data show that our algorithm is truly capable of real-time processing, and the frame rate could be improved significantly by using multiple threads on a multi-core CPU.
A manual evaluation is also conducted to quantitatively analyze the proposed algorithm. For each depth image, we label all the planes and obstacles and then analyze the performance of our system according to the ground-truth labels. To quantitatively measure accuracy, we calculate two commonly used benchmarks, recall and precision, on our dataset; the results are presented in Table 2. The performance is related to the complexity of the scene: complex scenes contain fewer planar areas, so the plane segmentation results degrade and in turn affect the obstacle detection. The classroom and corridor scenes are simple, so both recall and precision are high there, while in the more complex office room the performance is degraded.

Table 1. Frame rates at different resolutions: the processing time of our method compared with Holz's method [9].

    Image resolution            Holz's method    Our method
    VGA (640 × 480 pixels)      7 Hz             25 Hz
    QVGA (320 × 240 pixels)     27 Hz            67 Hz
    QQVGA (160 × 120 pixels)    30 Hz            130 Hz

Table 2. Recall and precision of the system for each scene, with the overall totals.

    Scene         Plane segmentation        Obstacle detection
                  recall      precision     recall      precision
    Classroom     100%        100%          100%        85%
    Office room   90.91%      96.30%        97.83%      91.30%
    Corridor      100%        100%          100%        92%
    Total         97.18%      98.70%        98.94%      89.19%

7 Conclusion

In this paper, we propose a two-step plane segmentation method capable of segmenting VGA-resolution depth images in real time. The depth image is converted into a 3D point cloud, and a voxel grid is created on it. We apply a voxel-wise initial segmentation on the voxel grid, followed by a pixel-wise accurate segmentation that refines the result. After plane segmentation, we use the result to implement obstacle detection in indoor environments. We implement the system on a laptop computer and conduct several experiments to evaluate the algorithms.
The frame rate is measured and compared with another algorithm, and the accuracy is quantitatively analyzed. The experimental results show that our method is fast and accurate, and therefore very suitable for indoor robot navigation or obstacle avoidance systems, which require high efficiency. As an extension of this work, we plan to combine the gray or color information acquired simultaneously by the Kinect to recognize the detected objects.

Acknowledgments. This research is supported by the National Nature Science Foundation of China and the Nature Science Foundation of Ningbo No.2012A.

References

1. Benavidez, P., Jamshidi, M.: Mobile robot navigation and target tracking system. In: International Conference on System of Systems Engineering (SoSE) (June 2011)
2. Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., Fitzgibbon, A.: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (2011)
3. Mann, S., Huang, J., Janzen, R., Lo, R., Rampersad, V., Chen, A., Doha, T.: Blind navigation with a wearable range camera and vibrotactile helmet. In: Proceedings of the 19th ACM International Conference on Multimedia (2011)
4. Comaniciu, D., Meer, P.: Mean shift: a robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (May 2002)
5. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient graph-based image segmentation. International Journal of Computer Vision (September 2004)
6. Zuliani, M., Kenney, C.S., Manjunath, B.S.: The multiRANSAC algorithm and its application to detect planar homographies. In: International Conference on Image Processing (2005)
7. Schwarz, L.A., Mateus, D., Lallemand, J., Navab, N.: Tracking planes with time-of-flight cameras and J-linkage. In: IEEE Workshop on Applications of Computer Vision (WACV) (2011)
8. Toldo, R., Fusiello, A.: Robust multiple structures estimation with J-Linkage. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part I. LNCS, vol. 5302. Springer, Heidelberg (2008)
9. Holz, D., Holzer, S., Rusu, R.B., Behnke, S.: Real-time plane segmentation using RGB-D cameras. In: RoboCup Symposium (2011)
10. Dube, D., Zell, A.: Real-time plane extraction from depth images with the randomized Hough transform. In: IEEE International Conference on Computer Vision Workshops (ICCV Workshops) (2011)
11. Wolf, C., Mille, J., Lombardi, L., Celiktutan, O., Jiu, M., Baccouche, M., Dellandra, E., Bichot, C.E., Garcia, C., Sankur, B.: The LIRIS human activities dataset and the ICPR human activities recognition and localization competition. Technical report, LIRIS Laboratory (March 28, 2012)
More informationTraffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers
Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers A. Salhi, B. Minaoui, M. Fakir, H. Chakib, H. Grimech Faculty of science and Technology Sultan Moulay Slimane
More informationSign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features
Sign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features Pat Jangyodsuk Department of Computer Science and Engineering The University
More information3D Line Segment Based Model Generation by RGB-D Camera for Camera Pose Estimation
3D Line Segment Based Model Generation by RGB-D Camera for Camera Pose Estimation Yusuke Nakayama, Hideo Saito, Masayoshi Shimizu, and Nobuyasu Yamaguchi Graduate School of Science and Technology, Keio
More informationAlgorithm research of 3D point cloud registration based on iterative closest point 1
Acta Technica 62, No. 3B/2017, 189 196 c 2017 Institute of Thermomechanics CAS, v.v.i. Algorithm research of 3D point cloud registration based on iterative closest point 1 Qian Gao 2, Yujian Wang 2,3,
More informationScene Text Detection Using Machine Learning Classifiers
601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department
More informationRobust color segmentation algorithms in illumination variation conditions
286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,
More informationFOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES
FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES D. Beloborodov a, L. Mestetskiy a a Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,
More information3D-2D Laser Range Finder calibration using a conic based geometry shape
3D-2D Laser Range Finder calibration using a conic based geometry shape Miguel Almeida 1, Paulo Dias 1, Miguel Oliveira 2, Vítor Santos 2 1 Dept. of Electronics, Telecom. and Informatics, IEETA, University
More informationTexture Sensitive Image Inpainting after Object Morphing
Texture Sensitive Image Inpainting after Object Morphing Yin Chieh Liu and Yi-Leh Wu Department of Computer Science and Information Engineering National Taiwan University of Science and Technology, Taiwan
More informationWatertight Planar Surface Reconstruction of Voxel Data
Watertight Planar Surface Reconstruction of Voxel Data Eric Turner CS 284 Final Project Report December 13, 2012 1. Introduction There are many scenarios where a 3D shape is represented by a voxel occupancy
More informationColored Point Cloud Registration Revisited Supplementary Material
Colored Point Cloud Registration Revisited Supplementary Material Jaesik Park Qian-Yi Zhou Vladlen Koltun Intel Labs A. RGB-D Image Alignment Section introduced a joint photometric and geometric objective
More informationA Linear Approximation Based Method for Noise-Robust and Illumination-Invariant Image Change Detection
A Linear Approximation Based Method for Noise-Robust and Illumination-Invariant Image Change Detection Bin Gao 2, Tie-Yan Liu 1, Qian-Sheng Cheng 2, and Wei-Ying Ma 1 1 Microsoft Research Asia, No.49 Zhichun
More informationDual Back-to-Back Kinects for 3-D Reconstruction
Ho Chuen Kam, Kin Hong Wong and Baiwu Zhang, Dual Back-to-Back Kinects for 3-D Reconstruction, ISVC'16 12th International Symposium on Visual Computing December 12-14, 2016, Las Vegas, Nevada, USA. Dual
More information3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller
3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationA Fast Caption Detection Method for Low Quality Video Images
2012 10th IAPR International Workshop on Document Analysis Systems A Fast Caption Detection Method for Low Quality Video Images Tianyi Gui, Jun Sun, Satoshi Naoi Fujitsu Research & Development Center CO.,
More informationCRF Based Point Cloud Segmentation Jonathan Nation
CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to
More informationMulti-view Stereo. Ivo Boyadzhiev CS7670: September 13, 2011
Multi-view Stereo Ivo Boyadzhiev CS7670: September 13, 2011 What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape
More informationMULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION
MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of
More informationLane Detection using Fuzzy C-Means Clustering
Lane Detection using Fuzzy C-Means Clustering Kwang-Baek Kim, Doo Heon Song 2, Jae-Hyun Cho 3 Dept. of Computer Engineering, Silla University, Busan, Korea 2 Dept. of Computer Games, Yong-in SongDam University,
More informationResearch on-board LIDAR point cloud data pretreatment
Acta Technica 62, No. 3B/2017, 1 16 c 2017 Institute of Thermomechanics CAS, v.v.i. Research on-board LIDAR point cloud data pretreatment Peng Cang 1, Zhenglin Yu 1, Bo Yu 2, 3 Abstract. In view of the
More informationStory Unit Segmentation with Friendly Acoustic Perception *
Story Unit Segmentation with Friendly Acoustic Perception * Longchuan Yan 1,3, Jun Du 2, Qingming Huang 3, and Shuqiang Jiang 1 1 Institute of Computing Technology, Chinese Academy of Sciences, Beijing,
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationFully Automatic Multi-organ Segmentation based on Multi-boost Learning and Statistical Shape Model Search
Fully Automatic Multi-organ Segmentation based on Multi-boost Learning and Statistical Shape Model Search Baochun He, Cheng Huang, Fucang Jia Shenzhen Institutes of Advanced Technology, Chinese Academy
More informationVideo Inter-frame Forgery Identification Based on Optical Flow Consistency
Sensors & Transducers 24 by IFSA Publishing, S. L. http://www.sensorsportal.com Video Inter-frame Forgery Identification Based on Optical Flow Consistency Qi Wang, Zhaohong Li, Zhenzhen Zhang, Qinglong
More informationResearch on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration
, pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information
More informationAn Automatic Timestamp Replanting Algorithm for Panorama Video Surveillance *
An Automatic Timestamp Replanting Algorithm for Panorama Video Surveillance * Xinguo Yu, Wu Song, Jun Cheng, Bo Qiu, and Bin He National Engineering Research Center for E-Learning, Central China Normal
More informationRoom Reconstruction from a Single Spherical Image by Higher-order Energy Minimization
Room Reconstruction from a Single Spherical Image by Higher-order Energy Minimization Kosuke Fukano, Yoshihiko Mochizuki, Satoshi Iizuka, Edgar Simo-Serra, Akihiro Sugimoto, and Hiroshi Ishikawa Waseda
More informationRegistration of Moving Surfaces by Means of One-Shot Laser Projection
Registration of Moving Surfaces by Means of One-Shot Laser Projection Carles Matabosch 1,DavidFofi 2, Joaquim Salvi 1, and Josep Forest 1 1 University of Girona, Institut d Informatica i Aplicacions, Girona,
More informationKeywords: clustering, construction, machine vision
CS4758: Robot Construction Worker Alycia Gailey, biomedical engineering, graduate student: asg47@cornell.edu Alex Slover, computer science, junior: ais46@cornell.edu Abstract: Progress has been made in
More informationAn Adaptive Threshold LBP Algorithm for Face Recognition
An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent
More informationBuilding Reliable 2D Maps from 3D Features
Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;
More informationRecovering 6D Object Pose and Predicting Next-Best-View in the Crowd Supplementary Material
Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd Supplementary Material Andreas Doumanoglou 1,2, Rigas Kouskouridas 1, Sotiris Malassiotis 2, Tae-Kyun Kim 1 1 Imperial College London
More informationA Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images
A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images Karthik Ram K.V & Mahantesh K Department of Electronics and Communication Engineering, SJB Institute of Technology, Bangalore,
More informationInteractive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds
1 Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds Takeru Niwa 1 and Hiroshi Masuda 2 1 The University of Electro-Communications, takeru.niwa@uec.ac.jp 2 The University
More informationObject Reconstruction
B. Scholz Object Reconstruction 1 / 39 MIN-Fakultät Fachbereich Informatik Object Reconstruction Benjamin Scholz Universität Hamburg Fakultät für Mathematik, Informatik und Naturwissenschaften Fachbereich
More informationAutomatic Shadow Removal by Illuminance in HSV Color Space
Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim
More informationA COMPETITION BASED ROOF DETECTION ALGORITHM FROM AIRBORNE LIDAR DATA
A COMPETITION BASED ROOF DETECTION ALGORITHM FROM AIRBORNE LIDAR DATA HUANG Xianfeng State Key Laboratory of Informaiton Engineering in Surveying, Mapping and Remote Sensing (Wuhan University), 129 Luoyu
More informationK-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors
K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors Shao-Tzu Huang, Chen-Chien Hsu, Wei-Yen Wang International Science Index, Electrical and Computer Engineering waset.org/publication/0007607
More informationCalibration of a rotating multi-beam Lidar
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract
More informationChaplin, Modern Times, 1936
Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections
More informationAAM Based Facial Feature Tracking with Kinect
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 3 Sofia 2015 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2015-0046 AAM Based Facial Feature Tracking
More informationSynchronized Ego-Motion Recovery of Two Face-to-Face Cameras
Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn
More informationAutomatic estimation of the inlier threshold in robust multiple structures fitting.
Automatic estimation of the inlier threshold in robust multiple structures fitting. Roberto Toldo and Andrea Fusiello Dipartimento di Informatica, Università di Verona Strada Le Grazie, 3734 Verona, Italy
More informationA Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c
4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2015) A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b
More informationCellular Learning Automata-Based Color Image Segmentation using Adaptive Chains
Cellular Learning Automata-Based Color Image Segmentation using Adaptive Chains Ahmad Ali Abin, Mehran Fotouhi, Shohreh Kasaei, Senior Member, IEEE Sharif University of Technology, Tehran, Iran abin@ce.sharif.edu,
More informationCamera Calibration from the Quasi-affine Invariance of Two Parallel Circles
Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Yihong Wu, Haijiang Zhu, Zhanyi Hu, and Fuchao Wu National Laboratory of Pattern Recognition, Institute of Automation, Chinese
More informationA Symmetry Operator and Its Application to the RoboCup
A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,
More informationPupil Localization Algorithm based on Hough Transform and Harris Corner Detection
Pupil Localization Algorithm based on Hough Transform and Harris Corner Detection 1 Chongqing University of Technology Electronic Information and Automation College Chongqing, 400054, China E-mail: zh_lian@cqut.edu.cn
More informationShort Survey on Static Hand Gesture Recognition
Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of
More informationRobust Online 3D Reconstruction Combining a Depth Sensor and Sparse Feature Points
2016 23rd International Conference on Pattern Recognition (ICPR) Cancún Center, Cancún, México, December 4-8, 2016 Robust Online 3D Reconstruction Combining a Depth Sensor and Sparse Feature Points Erik
More informationFast and Effective Interpolation Using Median Filter
Fast and Effective Interpolation Using Median Filter Jian Zhang 1, *, Siwei Ma 2, Yongbing Zhang 1, and Debin Zhao 1 1 Department of Computer Science, Harbin Institute of Technology, Harbin 150001, P.R.
More informationTUBULAR SURFACES EXTRACTION WITH MINIMAL ACTION SURFACES
TUBULAR SURFACES EXTRACTION WITH MINIMAL ACTION SURFACES XIANGJUN GAO Department of Computer and Information Technology, Shangqiu Normal University, Shangqiu 476000, Henan, China ABSTRACT This paper presents
More informationColor Image Segmentation
Color Image Segmentation Yining Deng, B. S. Manjunath and Hyundoo Shin* Department of Electrical and Computer Engineering University of California, Santa Barbara, CA 93106-9560 *Samsung Electronics Inc.
More informationThin Plate Spline Feature Point Matching for Organ Surfaces in Minimally Invasive Surgery Imaging
Thin Plate Spline Feature Point Matching for Organ Surfaces in Minimally Invasive Surgery Imaging Bingxiong Lin, Yu Sun and Xiaoning Qian University of South Florida, Tampa, FL., U.S.A. ABSTRACT Robust
More informationFast K-nearest neighbors searching algorithms for point clouds data of 3D scanning system 1
Acta Technica 62 No. 3B/2017, 141 148 c 2017 Institute of Thermomechanics CAS, v.v.i. Fast K-nearest neighbors searching algorithms for point clouds data of 3D scanning system 1 Zhang Fan 2, 3, Tan Yuegang
More informationAn Extended Line Tracking Algorithm
An Extended Line Tracking Algorithm Leonardo Romero Muñoz Facultad de Ingeniería Eléctrica UMSNH Morelia, Mich., Mexico Email: lromero@umich.mx Moises García Villanueva Facultad de Ingeniería Eléctrica
More informationHand gesture recognition with Leap Motion and Kinect devices
Hand gesture recognition with Leap Motion and devices Giulio Marin, Fabio Dominio and Pietro Zanuttigh Department of Information Engineering University of Padova, Italy Abstract The recent introduction
More informationSimulating a Finite State Mobile Agent System
Simulating a Finite State Mobile Agent System Liu Yong, Xu Congfu, Chen Yanyu, and Pan Yunhe College of Computer Science, Zhejiang University, Hangzhou 310027, P.R. China Abstract. This paper analyzes
More informationExploiting Depth Camera for 3D Spatial Relationship Interpretation
Exploiting Depth Camera for 3D Spatial Relationship Interpretation Jun Ye Kien A. Hua Data Systems Group, University of Central Florida Mar 1, 2013 Jun Ye and Kien A. Hua (UCF) 3D directional spatial relationships
More informationSimultaneous Vanishing Point Detection and Camera Calibration from Single Images
Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,
More informationReconstruction of 3D Models Consisting of Line Segments
Reconstruction of 3D Models Consisting of Line Segments Naoto Ienaga (B) and Hideo Saito Department of Information and Computer Science, Keio University, Yokohama, Japan {ienaga,saito}@hvrl.ics.keio.ac.jp
More informationDefinition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos
Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,
More informationLearning Semantic Environment Perception for Cognitive Robots
Learning Semantic Environment Perception for Cognitive Robots Sven Behnke University of Bonn, Germany Computer Science Institute VI Autonomous Intelligent Systems Some of Our Cognitive Robots Equipped
More information2 Proposed Methodology
3rd International Conference on Multimedia Technology(ICMT 2013) Object Detection in Image with Complex Background Dong Li, Yali Li, Fei He, Shengjin Wang 1 State Key Laboratory of Intelligent Technology
More informationCrystal Voronoi Diagram and Its Applications to Collision-Free Paths
Crystal Voronoi Diagram and Its Applications to Collision-Free Paths Kei Kobayashi 1 and Kokichi Sugihara 2 1 University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan, kkoba@stat.t.u-tokyo.ac.jp 2
More information