Visual map matching and localization using a global feature map


Oliver Pink
Institut für Mess- und Regelungstechnik
Universität Karlsruhe (TH), Karlsruhe, Germany
pink@mrt.uka.de

Abstract

This paper presents a novel method to support environmental perception of mobile robots by the use of a global feature map. While typical approaches to simultaneous localization and mapping (SLAM) mainly rely on an on-board camera for mapping, our approach uses geographically referenced aerial or satellite images to build a map in advance. The current position on the map is determined by matching features from the on-board camera to the global feature map. The problem of feature matching is posed as a standard point pattern matching problem, and a solution using the iterative closest point method is given. The proposed algorithm is designed for use in a street vehicle and uses lane markings as features, but can be adapted to almost any other type of feature that is visible in aerial images. Our approach allows for estimating the robot position at a higher precision than purely GPS-based localization, while at the same time providing information about the environment far beyond the current field of view.

1. Introduction

Finding the best path from one position to a desired target position while avoiding obstacles is an important task for vehicle and robot navigation. It requires a consistent map of the environment and a precise estimate of the current robot position. Research on the related problem of simultaneous localization and mapping (SLAM) has seen tremendous progress in recent years (see [19] for an overview). Recent work has shown the possibility of doing camera-based SLAM in real time [7][13][18]. Other research has focused on the SLAM problem for large-scale outdoor environments [15]. Current research in a field often referred to as visual odometry has shown the possibility of doing motion and position estimation based on camera data instead of using inertial sensors or wheel speed sensors [17][16][9]. For a geographical position estimate, these systems still depend on a GPS position [1].

The applicability of SLAM algorithms to outdoor environments makes them interesting for use in future driver assistance systems. However, path planning over long distances requires a topological and geometrical representation of the entire road network. A solution to obtain such a road network from LIDAR data is proposed in [14]. However, generating such a map for large areas by SLAM alone is very time-consuming. Instead, we will use georeferenced aerial imagery to build a map for visual localization in advance. Matching of the pre-built map and the current camera view is done by visual features only and can be described as a general point pattern matching problem, which is a standard problem in pattern recognition [20][5]. Our solution will use a combination of the iterative closest point algorithm (ICP) [2][21] and an iteratively reweighted least squares (IRLS) [4][6] robust regression for matching. The result is a position estimate within the feature map that provides knowledge about the drivable area and possible obstacles even in areas the vehicle has never been before. As a side effect of using a georeferenced map for localization, the position estimate can be obtained in geographical coordinates.

Figure 1. Idea of an optimal localization result. A global map allows for lane detection and path planning even in occluded areas or outside the current field of view.

The rest of this paper is organized as follows. In Section 2, the problem is formulated and suitable landmarks for map matching are discussed. The feature detection for both aerial images and vehicle camera images is described in Section 3. An iterative solution for the point pattern matching problem using the iterative closest point algorithm is given in Section 4. The experimental results for this solution are discussed in Section 5, and Section 6 concludes.

2. Preliminaries

For the localization process, two kinds of imagery will be used. The first is recorded by a stereo camera platform mounted inside the vehicle; an example image is given in Figure 2. The second is georeferenced aerial imagery, which is widely available at high resolutions. Figure 2 also shows a small part of such an image. The aerial images we use have a resolution of 10cm per pixel.

Figure 2. Example vehicle camera image and aerial image.

Additionally, a GPS measurement will be used to provide a coarse initial position and heading estimate for the proposed matching algorithm.

To match these two kinds of imagery, some sort of feature is needed that can be extracted from both images. Many features are possible in principle, but some, such as lamp posts, traffic signs or road texture, can be detected easily from the vehicle view but are hard to see in the aerial view. Others, like buildings or trees, are visible from both views but have very large dimensions, which makes it difficult to determine an exact position. We decided to use lane markings, since they are clearly visible from both views and since their detection is well discussed in the literature [8][10][11]. However, the method can be applied to any kind of landmark, or even to combinations of different landmarks.

Instead of using the start and end positions or the edges of the lane markings, only the centroids will be used for matching. This increases computation speed considerably, since only point-to-point distances have to be determined instead of line-to-line distances. The orientation of a single marking is thereby lost, and the matching is done solely by the geometric relationship of the lane markings. Since the aerial imagery contains no height information, the resulting feature map is 2-dimensional; all detected features are assumed to lie in one ground plane. Figure 3 shows a detail of the aerial image and the corresponding part of the feature map.

Figure 3. Part of the aerial image and corresponding feature map.

3. Feature detection

Although the same type of features has to be detected from both types of imagery, the requirements are quite different. While the feature detection from aerial imagery can be done off-line, the feature detection from the camera images has to be done online in real time. Thus for the aerial view, a slow but robust algorithm with a low false detection rate, e.g. a support vector machine or an artificial neural network [11], appears reasonable. Currently, map generation is done in three automated steps and one manual step: First, every image point is classified according to whether or not it belongs to a lane marking. The image points are then clustered according to their Euclidean distance, and the centroids of the clusters are estimated; additionally, start and end points are estimated for visualization. The resulting map is post-processed manually to remove remaining false detections and to add missing markings.
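As an illustration of the automated steps, the following minimal Python sketch derives cluster centroids from a binary marking mask. It is our sketch, not the author's implementation: the per-pixel classifier is assumed to exist already, and connected-component labelling stands in for the Euclidean-distance clustering.

```python
import numpy as np
from scipy import ndimage

def marking_centroids(mask):
    """Cluster lane-marking pixels and return one centroid per cluster.

    mask: 2D boolean array, True where a pixel was classified as a
    lane marking (output of the per-pixel classifier).
    Returns an (n, 2) array of (row, col) cluster centroids.
    """
    # Connected-component labelling stands in for the Euclidean
    # distance clustering of the marking pixels.
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.empty((0, 2))
    # One centroid per cluster, as stored in the feature map.
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))
```

The pixel centroids would then be converted to geographic coordinates using the georeference of the aerial image; start and end point estimation and the manual post-processing step are omitted here.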
For lane marking detection from the camera view, computing time is critical, but several false detections are acceptable. We therefore decided to use a relatively simple algorithm which makes use of the Canny edge detector. The detected edge points are clustered according to their proximity in pixel coordinates, and for each cluster a centroid position is calculated. Additionally, every cluster gets a weighting $\gamma_j$ according to the number of edge points it consists of. Consider the case that one marking is erroneously split up into two or more clusters; this marking would then have a higher influence on the matching result than a correctly clustered marking. Weighting every centroid with the number of corresponding points avoids this problem. Figure 4 shows an example result of the edge detection and clustering, together with the centroids that will be used for point pattern matching. Some false detections on the engine hood and on the bicycle lane are clearly visible.

Figure 4. Example result of the Canny edge detection and clustering, and corresponding centroids of the clusters.

Finally, the centroids are transformed from pixel coordinates to vehicle coordinates, assuming that they lie in one ground plane. The ground plane is determined in advance using the v-disparity method. Each row of the v-disparity image is given by the disparity histogram of the corresponding row of the disparity image. As shown in [12], every tilted plane in 3D space with zero roll angle becomes a straight line in the v-disparity image. Figure 5 shows an example disparity image and the corresponding v-disparity. Line detection can be done by e.g. a Hough or Radon transform of the v-disparity image; since the Hough transform would require binarization, we decided to use an efficient implementation of the Radon transform [3]. A sketch of the v-disparity computation is given below.

Figure 5. Overlay of source and disparity image (in pixels) and resulting v-disparity. The ground plane is a straight line in the v-disparity image.
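The following is a minimal sketch of the v-disparity computation described above (our illustration, not the author's code); the Radon-transform line fit that extracts the ground plane is omitted.

```python
import numpy as np

def v_disparity(disp, max_d=64):
    """Row-wise disparity histograms, stacked into a v-disparity image.

    disp: 2D integer disparity image; values < 0 mark invalid pixels.
    Returns a (rows, max_d) array in which a tilted ground plane with
    zero roll angle appears as a straight line [12].
    max_d is a placeholder for the disparity search range.
    """
    rows = disp.shape[0]
    vdisp = np.zeros((rows, max_d), dtype=np.int64)
    for v in range(rows):
        d = disp[v]
        d = d[(d >= 0) & (d < max_d)]
        # Histogram of the valid disparities in image row v.
        vdisp[v] = np.bincount(d, minlength=max_d)[:max_d]
    return vdisp
```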
4. Feature Matching

The problem of estimating the vehicle position in the global scene according to the current camera view has now been posed as matching a set of 2D feature points from the camera view to their respective points in the global 2D scene, where the camera points are assumed to be a subset of the scene points. Since we are dealing with a rigid scene, the transformation between the point sets can be described as a 2D translation and a 1D rotation.

Matching two point patterns is a standard problem in computer vision for which many solutions have been proposed in recent years. These solutions mainly differ in their complexity and their robustness to outliers. We decided to use the iterative closest point algorithm [21] as a very fast and simple method for point pattern matching. One of the major drawbacks of this method is that it does not find the globally optimal match but is very likely to get stuck in nearby local minima. Other methods like the ones proposed by van Wamelen et al. [20] or Caetano et al. [5] do find the global optimum but require a larger amount of computing time. Future work will also evaluate possibilities for finding a globally optimal position estimate by making use of these techniques.

The $n$ scene points in $\mathbb{R}^2$ from the aerial view are now given as $S = \{s^W_1, s^W_2, \dots, s^W_n\}$, where the superscript $W$ denotes the world coordinate system; the $m$ template points in $\mathbb{R}^2$ from the camera view are given as $T = \{t^V_1, t^V_2, \dots, t^V_m\}$, where $V$ denotes the vehicle coordinate system.

Figure 6. World coordinate system (blue) and vehicle coordinate system (red).

Figure 6 shows the relationship between the world coordinate system and the vehicle coordinate system. With the exact vehicle position $x^W$ and heading $\varphi$, the scene points can be transformed from world coordinates to vehicle coordinates by the inverse transformation

$$s^V_i = R^{-1}(s^W_i - x^W), \qquad (1)$$

where $R^{-1}$ is the inverse rotation matrix

$$R^{-1} = \begin{bmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{bmatrix}. \qquad (2)$$

In this case, every template point would exactly match its corresponding scene point, i.e. $s^V_i = t^V_j$ for corresponding points $s_i, t_j$. For each template point $t_j$, the matching scene point $s_i$ will be called $m_j$.

Now an initial estimate for the vehicle position $x^W$ and heading $\varphi$ is assumed to be known up to a certain error $\Delta x^W$ and $\Delta\varphi$, i.e.

$$\hat{x}^W = x^W + \Delta x^W \qquad (3)$$
$$\hat{\varphi} = \varphi + \Delta\varphi \qquad (4)$$
$$\hat{R} = R\,\Delta R. \qquad (5)$$

The resulting position estimates of the scene points in vehicle coordinates are

$$\hat{s}^V_i = s^V_i + \Delta s^V_i = \hat{R}^{-1}\left(s^W_i - \hat{x}^W\right) \qquad (6)$$

or, using equation (3),

$$\hat{s}^V_i = \hat{R}^{-1}\left(s^W_i - x^W - \Delta x^W\right). \qquad (7)$$

With equations (1) and (5), this can be further rewritten as

$$\hat{s}^V_i = \Delta R^{-1} R^{-1}(s^W_i - x^W) - \hat{R}^{-1}\Delta x^W \qquad (8)$$
$$= \Delta R^{-1} s^V_i - \hat{R}^{-1}\Delta x^W. \qquad (9)$$

The estimated positions of the scene points thus result from a rotation $\Delta R^{-1}$ and a translation $\hat{R}^{-1}\Delta x^W = \Delta x^V$ of the real scene points. Assuming that for every template point $t^V_j$ the matching scene point $m^V_j$ is known, the translation error and the rotation error can be determined by solving the $m$ equations

$$\hat{s}^V_i = m^V_j = \begin{bmatrix} \cos\Delta\varphi & \sin\Delta\varphi \\ -\sin\Delta\varphi & \cos\Delta\varphi \end{bmatrix} t^V_j - \Delta x^V \qquad (10)$$

for the variables $\Delta x^V$ and $\Delta\varphi$ by minimizing the objective function

$$f(\Delta x, \Delta\varphi) = \sum_{j=1}^{m} \left\| \begin{bmatrix} \cos\Delta\varphi & \sin\Delta\varphi \\ -\sin\Delta\varphi & \cos\Delta\varphi \end{bmatrix} t^V_j - \Delta x^V - m^V_j \right\|^2. \qquad (11)$$

However, this would require the point correspondences to be known in advance. Since this is not the case, points are paired with their closest neighbor. For every template point $t_j$, $j \in \{1..m\}$, the corresponding scene point $s_i$, $i \in \{1..n\}$, is chosen according to

$$d(m_j, t_j) = \min_{k \in \{1..n\}} d(s_k, t_j), \qquad (12)$$

where $d(x, y)$ denotes the Euclidean distance of the points $x$ and $y$.

Since the closest point is not necessarily the best point, and since we will have to cope with additional outliers due to false detections, the estimation has to be robust against outliers. The original version of the ICP algorithm [2][21] already includes some measures to increase robustness, such as only allowing a maximum tolerable distance. Additionally, instead of solving the objective function using the common $\ell_2$-norm, we decided to use an iteratively reweighted least squares algorithm that minimizes a mixed $\ell_1/\ell_2$-norm for better robustness against outliers. The weighting $w_j$ for each point correspondence is chosen according to [6]

$$w_j = \begin{cases} \epsilon\, d(s_i, t_j)^{-1} & d(s_i, t_j) > \epsilon \\ 1 & d(s_i, t_j) \le \epsilon. \end{cases} \qquad (13)$$

Together with the weightings $\gamma_j$ as a quality measure for each template point (see Section 3), the overall objective function to be minimized becomes

$$f(\Delta x, \Delta\varphi) = \sum_{j=1}^{m} \gamma_j\, w_j \left\| \Delta R^{-1} t^V_j - \Delta x^V - m^V_j \right\|^2. \qquad (14)$$

The resulting estimates for $\Delta x$ and $\Delta\varphi$ are now used to refine the vehicle position estimate. In the $k$-th iteration, the next position and heading estimate is determined according to

$$x^W_{k+1} = x^W_k - \Delta x^W_k = x^W_k - R_k\, \Delta x^V_k \qquad (15)$$
$$\varphi_{k+1} = \varphi_k - \Delta\varphi_k. \qquad (16)$$

With every new position estimate, the point pairing and the regression are repeated until the vehicle position converges.
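For illustration, the following sketch implements one iteration of this scheme: closest-point pairing (eq. 12), the robust weights (eq. 13) combined with the cluster weights $\gamma_j$ (eq. 14), and the pose update (eqs. 15-16). It is our reading of the equations rather than the author's code; in particular, it linearizes the small rotation $\Delta R$ instead of solving the nonlinear regression exactly, and the function name and the $\epsilon$ default are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_irls_step(scene_W, templ_V, x_W, phi, eps=0.5, gamma=None):
    """One matching iteration. scene_W: (n, 2) map points in world
    coordinates; templ_V: (m, 2) centroids in vehicle coordinates;
    x_W, phi: current pose estimate. Returns the refined pose."""
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s], [s, c]])
    # Scene points in vehicle coordinates for the current pose,
    # eq. (1): s^V = R^{-1} (s^W - x^W), written in row-vector form.
    scene_V = (scene_W - x_W) @ R
    # Closest-point pairing, eq. (12).
    dist, idx = cKDTree(scene_V).query(templ_V)
    m_V = scene_V[idx]
    # Mixed l1/l2 weights, eq. (13), combined with gamma_j, eq. (14).
    w = np.where(dist > eps, eps / np.maximum(dist, 1e-9), 1.0)
    if gamma is not None:
        w = w * gamma
    # Small-angle linearization of eq. (14):
    # dphi * [t_y, -t_x] - dx ~= m - t, solved by weighted least squares.
    m = len(templ_V)
    J = np.zeros((2 * m, 3))
    J[0::2, 0] = -1.0
    J[1::2, 1] = -1.0
    J[0::2, 2] = templ_V[:, 1]
    J[1::2, 2] = -templ_V[:, 0]
    r = (m_V - templ_V).reshape(-1)
    sw = np.sqrt(np.repeat(w, 2))
    p, *_ = np.linalg.lstsq(J * sw[:, None], r * sw, rcond=None)
    dx_V, dphi = p[:2], p[2]
    # Pose update, eqs. (15)-(16): subtract the estimated error.
    return x_W - R @ dx_V, phi - dphi
```

Repeating this step until the pose converges yields the full matching loop; as in the original ICP, pairs beyond a maximum tolerable distance could additionally be discarded.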

5. Experimental Results

The proposed pattern matching algorithm was tested on real data from an intersection in Karlsruhe, Germany, using georeferenced aerial imagery provided by the city of Karlsruhe, VLW Geodaten. The camera inclination and the reference position were determined in advance.

5.1. Convergence evaluation

Figure 7 shows an overlay of the vehicle camera view with the map for a randomly chosen position at a distance of 2m from the reference position and a heading difference of 15°, together with the overlay for the same scene after 10 iterations. Both heading and position estimate are very close to the reference value. The lane markings from the map match the real lane markings up to several centimeters. Even beyond the region of interest, the lanes from the map approximately match the real lanes.

Figure 7. Overlay of the camera view and the lane marking map (yellow) from aerial imagery. (a): Initial position, (b): Position estimate after 10 iterations. Only the non-shaded lower part of the image is considered for feature detection.

To get a more detailed view of the matching results, the absolute position errors for different initial positions are shown in Figure 8. For testing, the initial position was moved in a random direction by a random amount between 0.5m and 2m. Every line in the plot shows the development of the position error over the iterations for a different initial position. Except for one measurement, all results are within 1m of the reference position; most of the results are even within 25cm. In all cases, the final position is reached within 10 iterations, and already after 5 iterations all positions lie within 1m of the reference position.

Figure 8. Evolution of the absolute position deviation versus the number of iterations for randomly chosen initial position deviations between 0.5 and 2.0m. Every line denotes a different initial position.

However, the single measurement that does not converge to the reference position requires some attention. It shows one of the major disadvantages of the iterative closest point algorithm: its susceptibility to local minima. The larger the distance from the original position becomes, the more likely it is that the global optimum will not be reached. Further testing for greater initial distances has confirmed this behavior. For initial positions between 8m and 10m from the reference position, 21 out of 78 results were still within 1m of the reference position. For larger distances, a globally optimal pattern matching algorithm would therefore be favorable.

Reliable matching at distances up to 2m is still a remarkable result. At a refresh rate of 24Hz, this allows for position tracking at speeds up to 48m/s by just using the position estimate from the last frame as an initial estimate. A better estimate for the next frame would of course be the predicted vehicle position, e.g. from a Kalman filter or particle filter. Filtering would also allow obtaining a more consistent estimate of the vehicle position and will be implemented in the future.

Similar to the distance plot, Figure 9 shows the development of the heading error for different initial headings between −15° and +15° from the reference heading. After 10 iteration steps, all resulting headings are within 1° of the reference heading.
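As a hypothetical illustration of the frame-to-frame tracking discussed above, the initial estimate for the next frame could be predicted under a constant-velocity assumption instead of simply reusing the last estimate; this is a stand-in for the suggested Kalman or particle filter prediction, not part of the evaluated system.

```python
import numpy as np

def predict_pose(x_prev, phi_prev, x_curr, phi_curr):
    """Constant-velocity prediction of the next frame's initial pose.

    x_*: 2D position estimates (world coordinates) from the last two
    frames; phi_*: the corresponding heading estimates.
    """
    x_prev, x_curr = np.asarray(x_prev), np.asarray(x_curr)
    # Extrapolate one frame ahead: x + (x - x_prev), same for heading.
    return 2 * x_curr - x_prev, 2 * phi_curr - phi_prev
```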

Figure 9. Evolution of the heading deviation [°] versus the number of iterations for randomly chosen initial heading deviations between −15° and +15°. Every line denotes a different initial position.

5.2. Robustness evaluation

To evaluate the performance of the proposed matching algorithm compared to standard ICP, the final position estimates of both algorithms are investigated. Figure 11(a) shows the matching result of standard ICP for an image with only few outliers, and Figure 11(b) shows the matching result for the same image using the proposed robust method. While the lane markings from standard ICP match the real lane markings very well, the results from the robust method are even closer to the optimal solution. The result for standard ICP is considerably influenced by some outliers on the bicycle lane and on the engine hood (see Figure 4).

Another example in a similar scene, but with fewer lane markings and a greater number of outliers on a preceding vehicle, confirms this observation. Figure 10 shows the detected edges for this scene. The resulting final estimate for standard ICP is given in Figure 11(c), the result for the robust method in Figure 11(d). The lane marking overlay for the ICP position estimate is visibly displaced to the right, while the result for the robust algorithm remains within several centimeters of the optimal position.

Figure 10. Edge detection result for a scene with preceding vehicles. The edges on the vehicles lead to falsely detected lane markings.

Figure 11. Overlays of different camera views and lane marking map (yellow) using different algorithms. (a) and (c): Standard ICP, (b) and (d): robust ICP.

While the last two examples had a very large number of good lane markings compared to the number of outliers, the final example is a curved road that has only few lane markings close to the vehicle and a large number of false positives on a preceding vehicle and on curbs that are not part of the map. The edge detection result is given in Figure 12(a). As the resulting overlay of the map and the source image (Figure 12(b)) shows, the position and heading estimate remains acceptable despite the large number of outliers. Although the impact of falsely detected lane markings can be greatly reduced by the proposed robust estimation, their influence is still visible in the given examples. Further work will have to address this problem by e.g. refining the matching result with an appropriate subset of the detected markings.

Figure 12. Results for an image with very few clearly visible lane markings. (a): edge detection result, (b): overlay of image and map after 10 iterations.

6. Conclusion and Future Work

This paper describes a method to use widely available aerial imagery to support a vehicle's environmental perception and path planning. We introduced an iterated method for matching aerial imagery to vehicle camera images and validated the algorithm on real data. With an initial position and heading of up to 2m / 15° from the reference, most of the matching results lie within 25cm / 1° of the reference position. This work shows that the idea of using aerial imagery to support environmental perception is promising. With the help of the aerial imagery, the position of lane markings can be determined even at long distances or outside the view of the vehicle. Future research will evaluate possibilities for fully automated feature detection from aerial imagery and whether the detected markings can be used to automatically reconstruct road network topology and geometry. Furthermore, we will evaluate possibilities for globally optimal visual map matching.

While the current version of the proposed algorithm still relies on an initial GPS estimate, future solutions could use the result of a globally optimal algorithm as the initial value.

7. Acknowledgements

The author would like to thank the city of Karlsruhe, VLW Geodaten, for providing aerial imagery and the Karlsruhe School of Optics and Photonics (KSOP) for support and funding.

References

[1] M. Agrawal and K. Konolige. Real-time localization in outdoor environments using stereo vision and inexpensive GPS. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), volume 3, 2006.
[2] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
[3] J. Beyerer and F. Puente León. Die Radontransformation in der digitalen Bildverarbeitung. Automatisierungstechnik, 50, 2002.
[4] K. P. Bube and R. T. Langan. Hybrid l1/l2 minimization with applications to tomography. Geophysics, 62(4), 1997.
[5] T. S. Caetano, T. Caelli, D. Schuurmans, and D. A. C. Barone. Graphical models and point pattern matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10), Oct. 2006.
[6] G. Darche. Iterative l1 deconvolution. Annual Report of the Stanford Exploration Project, 61.
[7] A. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052-1067, June 2007.
[8] C. Duchow. A novel, signal model based approach to lane detection for use in intersection assistance. In Proceedings of the IEEE Intelligent Transportation Systems Conference, 2006.
[9] J. Horn, A. Bachmann, and T. Dang. Stereo vision based ego-motion estimation with sensor supported subset validation. In Proceedings of the IEEE Intelligent Vehicles Symposium 2007, June 2007.
[10] S.-S. Ieng, J.-P. Tarel, and R. Labayrade. On the design of a single lane-markings detector regardless the on-board camera's position. In Proceedings of the IEEE Intelligent Vehicles Symposium 2003, 2003.
[11] Z. Kim. Realtime lane tracking of curved local road. In Proceedings of the IEEE Intelligent Transportation Systems Conference, 2006.
[12] R. Labayrade, D. Aubert, and J.-P. Tarel. Real time obstacle detection in stereovision on non flat road geometry through v-disparity representation. In Proceedings of the IEEE Intelligent Vehicle Symposium 2002, volume 2, June 2002.
[13] T. Lemaire, C. Berger, I. Jung, and S. Lacroix. Vision-based SLAM: Stereo and monocular approaches. International Journal of Computer Vision, 74(3), September 2007.
[14] J. Levinson, M. Montemerlo, and S. Thrun. Map-based precision vehicle localization in urban environments. In Proceedings of Robotics: Science and Systems, Atlanta, USA, June 2007.
[15] R. Martinez-Cantin and J. Castellanos. Unscented SLAM for large-scale outdoor environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2-6 Aug. 2005.
[16] R. Munguia and A. Grau. Monocular SLAM for visual odometry. In IEEE International Symposium on Intelligent Signal Processing (WISP 2007), pages 1-6, 3-5 Oct. 2007.
[17] D. Nister, O. Naroditsky, and J. Bergen. Visual odometry. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 1, pages I-652-I-659, 27 June-2 July 2004.
[18] R. Sim, P. Elinas, and J. J. Little. A study of the Rao-Blackwellised particle filter for efficient and accurate vision-based SLAM. International Journal of Computer Vision, 74(3):303-318, 2007.
[19] S. Thrun. Robotic mapping: A survey. In Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2002.
[20] P. B. van Wamelen, Z. Li, and S. S. Iyengar. A fast expected time algorithm for the 2-D point pattern matching problem. Pattern Recognition, 37(8), 2004.
[21] Z. Zhang. Iterative point matching for registration of free-form curves. Technical Report RR-1658, Institut National de Recherche en Informatique et en Automatique (INRIA), France, March 1992.
