SIFT-Based Localization Using a Prior World Model for Robot Navigation in Urban Environments
H. Viggh and K. Ni
MIT Lincoln Laboratory, Lexington, MA USA

Abstract - This project tested the use of a prior world model for robot navigation in an urban environment, where the Global Positioning System (GPS) can perform poorly due to multi-path effects. A spatially sampled world model was used, consisting of Scale Invariant Feature Transform (SIFT) keypoints extracted from tourist-like photos and georegistered using airborne LIDAR data. A pushcart emulating a mobile robot platform was equipped with digital cameras and GPS units and maneuvered through the same area as the prior model. SIFT features detected in the new photos were matched against the prior model's SIFT keypoints and used to localize the cameras relative to the prior model. The accuracy of these SIFT-based localizations was analyzed and found to have half the geo-localization error of GPS. Such a model-based geo-location approach can therefore enable navigation performance at least twice as good as GPS-based navigation in urban environments.

Keywords: autonomous vehicle; mobile robot; localization; navigation; SIFT; GPS; urban

1 Introduction

Autonomous vehicles such as mobile robots are limited in the sensors and data processors they can carry due to size, weight, and power constraints. When operating in a netcentric environment, robots can access resources on the network to augment their onboard capabilities. These resources can include data collected by sensors on other robots, on airborne platforms, and on the ground, as well as offboard processing resources for world modeling, mission planning, perception, and navigation.
When a robot utilizes these offboard resources, the robotic system effectively becomes distributed, with the mobile robot platform acting as an end effector that carries only those sensors and processing capabilities that must be on the mobile platform to execute its mission. This will be referred to as Distributed RObotics in a Netcentric Environment (DRONE). The available communications bandwidth between the robot and the network drives the overall system design: what must be on the robot versus what can remain on the network. The physics of the sensing and perception process, such as the required aperture, range to target, and view angle, will require that certain mission-critical sensors be on the mobile robot platform. Similarly, the need for fast reaction in critical sensing-based decision making will drive other sensing and processing capabilities onto the mobile platform as well. However, all other processing of the sensor data collected by the mobile robot could be sent offboard if enough bandwidth is available. Likewise, data collected by offboard sensors, and information extracted from such data, could be sent to the robot for processing, especially if it is to be combined with local sensor data. One useful resource on the network is a model of the world built and updated from offboard sensors and other data sources. Such a model could guide the mobile robot and its interactions with the environment, aid in mission planning, provide options for navigation, and segment its understanding of the world into dynamic and static components.

(This work is sponsored by the United States Department of Defense under Air Force Contract # FA C. Opinions, interpretations, recommendations and conclusions are those of the authors and are not necessarily endorsed by the United States Government.)
Examples of world models have been demonstrated in multi-modal three-dimensional (3D) fused datasets built from airborne 3D LIDAR and aerial or ground video, as described in [1] and [2]. By their nature, LIDAR point clouds are inherently 3D, but incorporating images and video into the world model requires the extraction of features, for which the Scale-Invariant Feature Transform (SIFT), developed by Lowe [3], and its variants have been widely used. A recent successful approach to constructing world models from SIFT features, the procedure employed in this paper, relies on sparse 3D reconstruction of urban scenes via structure from motion (SfM) applied to random collections of tourist photographs. These models are constructed with techniques pioneered by Snavely [4][5] and implemented in the commercial Photosynth software. In order to exploit the world model, the robot must first be able to reference it in a meaningful way. That is, a robot needs to register itself to the model in order to address its environment. Satellite-based Global Positioning System (GPS) localizations are often used for mobile robot navigation, but in urban environments, tall buildings with reflective surfaces frequently induce GPS multi-path errors. Due to the passive nature of image sensors, localization and registration with photos do not suffer from such errors and offer a more robust approach to urban navigation. Because of its image matching capabilities, the same SIFT machinery used for world modeling is exceedingly useful for robotic navigation and localization. A robot can self-localize by matching and referencing a photo to 3D reconstructions whose sample points are SIFT features. SIFT feature localization has been
popular in robotic applications [6][7][8][9]. Most of these applications have been indoors and relied on SLAM [10], with the robot building the world model itself. The original contribution of this paper is a demonstration of robot localization in an urban environment using an offboard prior world model generated from a random tourist photo collection. To compare the performance of SIFT-based localization to GPS, the authors collected GPS data and photos in an urban environment using a pushcart, emulating local data collection from a mobile robot platform. The photos and GPS data were collected in an area for which a prior world SIFT model existed, built from a collection of random photos taken by pedestrians. The GPS localization accuracy was compared to a model-based localization technique that detects SIFT features in the local camera photos and matches them to SIFT features in the prior world model. Multiple feature matches were then used to localize camera positions relative to the geo-referenced prior model. This same technique could also be used for non-robot applications that require localization and navigation in an urban environment.

2 SIFT-based geo-referencing

2.1 Prior world model

This project utilized an existing prior world model consisting of a 3D point cloud of SIFT features detected on buildings in and near MIT East Campus in Cambridge, MA [10]. The 128-dimensional SIFT feature vectors, also referred to as keypoints, tend to be invariant to rotation, large changes in image scale, and slight changes in illumination. The 3D keypoint cloud fused with aerial LIDAR was generated under an internally funded Lincoln Laboratory program. Research under that program generated a 3D SIFT keypoint cloud from a prior set of photos and then matched SIFT features from new photos to the cloud.

Figure 1. Fused LIDAR and SIFT point cloud.
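The matching at the heart of this approach pairs each SIFT descriptor in a new photo with its nearest neighbor in the model cloud, accepting a pair only if it passes Lowe's ratio test [3]. A minimal NumPy sketch (the function name and descriptor layout are illustrative; the paper's actual pipeline is not public):

```python
import numpy as np

def ratio_test_match(query_desc, model_desc, ratio=0.75):
    """For each query descriptor, find its two nearest model descriptors
    (Euclidean distance) and keep the match only if the nearest is clearly
    closer than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(model_desc - q, axis=1)  # distances to all model keypoints
        j1, j2 = np.argsort(d)[:2]                  # two nearest neighbours
        if d[j1] < ratio * d[j2]:
            matches.append((i, int(j1)))            # (query index, model index)
    return matches
```

In practice the 128-dimensional descriptors and a 337,057-point cloud would call for an approximate nearest-neighbour index (e.g. a k-d tree) rather than this brute-force loop.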
Using this approach, new photos with unknown location can be localized within the prior keypoint cloud. A mobile robot could therefore use the same approach to localize itself based on the localizations of photos taken by its cameras. If the keypoint cloud is georegistered, then so will be the robot's location. Localization here refers to finding the location in an arbitrary reference frame, while geo-registration refers to converting the localization into a geo-referenced frame such as UTM.

2.2 Point cloud generation

The prior SIFT keypoint cloud can be thought of as a spatially sampled 3D model of the world, where the samples are taken where distinctive SIFT features are located. Such point clouds could be generated from prior sensor data collected by other ground robots, by low-flying aircraft (manned or unmanned), or by people walking with cameras. The point cloud used for this project was generated using 2317 photos taken randomly on five separate days in July. The photos were taken by MIT Lincoln Laboratory staff members using various consumer digital cameras while walking through the area, emulating tourists taking photos. The process for building the 3D cloud of SIFT keypoints from these photos is described in [11]. Briefly, SIFT features common to multiple photos were used to estimate the relative geometry of the cameras and thereby the relative geometry of the common SIFT keypoints. Each common SIFT keypoint was then assigned a 3D coordinate along with its 128-dimensional SIFT feature vector. The resulting 3D SIFT keypoint cloud was then geo-registered by manually aligning it to a geo-registered 3D point cloud from an airborne LIDAR sensor. The LIDAR data came from the Army's Topographic Engineer Center. Figure 1 depicts the SIFT and LIDAR point clouds plotted together.
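Aligning the SIFT cloud to the LIDAR cloud amounts to estimating a similarity transform (scale, rotation, translation). The paper did this manually; given a handful of picked point correspondences, the same step could be solved in least squares with Umeyama's method. A sketch under that assumption (names illustrative):

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping points src onto dst (rows are points), Umeyama-style:
    minimizes sum ||dst_i - (s * R @ src_i + t)||^2."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d               # centred point sets
    U, S, Vt = np.linalg.svd(B.T @ A)           # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applied to geo-registration, src would be SIFT-cloud coordinates and dst the corresponding LIDAR/UTM coordinates; the recovered (s, R, t) then maps the entire keypoint cloud into the geo-referenced frame.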
2.3 Photo localization and geo-registration

The prior SIFT keypoint cloud can be exploited to localize and geo-register new photos with unknown location [12]. This is done by matching SIFT features in a new 2D photo to SIFT keypoints in the prior 3D SIFT keypoint cloud [13]. When there are sufficient feature matches across the new image, the location and pose of the camera that took the new photo can be estimated. Since the world model's SIFT keypoints are geo-registered, features and positions from the new photo will be referenced to absolute geo-coordinates (i.e., UTM).

3 Data collection

In order to compare the relative accuracy of SIFT-based and GPS localizations, a new collection of simultaneous photos and GPS data was required. This data collection was done on November 18, 2010 in Cambridge, MA, within the area covered by the prior SIFT keypoint cloud described in Section 2. This SIFT keypoint cloud can be viewed as the prior world model available on the network in the DRONE robot navigation use-case described above.

3.1 Equipment

A pushcart was used to carry the cameras and GPS sensors in place of a mobile robot platform, as shown in Figure 2. The cart was equipped with pneumatic tires to reduce shocks while traversing urban sidewalks and streets. Four Nikon D5000 digital cameras were mounted to the cart pointing to the front, rear, left, and right, and tilted up approximately 5 degrees. The D5000 is a consumer Digital Single Lens Reflex (DSLR) camera with a 12.3 Mpixel focal plane sensor. The D5000 was selected because its built-in interval timer allows the camera to be programmed to take photos continuously at a specified time interval. Each D5000 was equipped with a Nikon DX AF-S Nikkor 18-55mm zoom lens with auto-focus (AF) and vibration reduction (VR) features. Each camera used a 16 GB SDRAM memory card for photo storage. The internal clocks of all four cameras were synchronized to within 1 second using a watch recently set to GPS time. Each D5000 camera was equipped with a Nikon GP-1 GPS unit. These GPS sensors were used to geo-tag each collected photo with GPS coordinates. In addition, a Garmin Vista HCx GPS unit was mounted on a wooden pole that lifted it approximately 1 meter above the cameras, in an attempt to improve GPS signal reception and provide an independent GPS data set.

3.2 Procedure

Data collection was done between approximately 10:30 AM and 12:30 PM. The weather was generally clear, with blue skies and a few high cirrus clouds. The lens on each D5000 camera was set to 24mm, VR off, and AF on. Each D5000 camera was set to store images in Large (12MP) Fine Resolution format, and the interval between photos was set to 5 seconds for most runs. For one run, a 1-second interval was used. Note that the cameras were started manually and only approximately synchronized. The cart was pushed along the path depicted by solid lines in Figure 3, completing two circuits, one in each direction. On the second circuit, the path was on the opposite side of Hayward Street. The cart was pushed at approximately meters/second, stopping as needed to make turns, and occasionally the cart was spun 360 degrees in place. Both of these circuits were done with a photo interval of 5 seconds for each camera and began and ended on Hayward St. One extra northbound leg was done along Carleton St. in front of the MIT Medical Department, with photos collected at 1-second intervals. There were four major pauses during the data collection to take notes, during which the cameras and Garmin GPS units were halted.

Figure 2. Data collection pushcart with cameras and GPS.

4 Analysis and results

4.1 SIFT-based localization

Using the software described in Section 2, each of the collected D5000 photos was processed to find SIFT features and match them to the prior world model SIFT keypoint cloud, also described in Section 2. The result for each photo was a text output file that indicated whether a valid localization was found for the photo, and if so, its UTM coordinates.

4.2 Data ingest and pre-processing

All further analysis software was written in MATLAB (R2010a). The Nikon JPEG meta-data of each D5000 photo was parsed to extract the date, time, and GPS localization if available. The Nikon JPEG meta-data attribute filemoddate was used as the collection date and time of each photo, since not all photos had a valid GPS localization, due either to a lack of sufficient satellite signals in the urban environment or to a loose cable on the front camera's GPS unit. For those photos that did have a valid GPS localization, the filemoddate was automatically synched to the GPS date and time by the camera. GPS latitude-longitude coordinates were converted to UTM coordinates. The output text files from the SIFT-based localization of each D5000 photo were parsed, and the UTM coordinates of the SIFT localization, if available, were extracted.
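The photo localization step of Section 2.3 estimates a camera's pose from 2D-3D feature matches. The paper's software is not public; as a simplified, non-robust stand-in, the linear DLT resection below recovers a 3x4 projection matrix and the camera centre from at least six correspondences. A real system would wrap this (or a calibrated PnP solver) in RANSAC to reject bad matches:

```python
import numpy as np

def dlt_resection(points_3d, points_2d):
    """Linear camera resection (DLT): estimate the 3x4 projection matrix P
    from >= 6 non-coplanar 2D-3D correspondences, then extract the camera
    centre C (the point satisfying P @ [C; 1] = 0)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = Vt[-1].reshape(3, 4)                     # null-space solution, up to scale
    centre = -np.linalg.inv(P[:, :3]) @ P[:, 3]  # scale-invariant camera centre
    return P, centre
```

With the keypoint cloud geo-registered in UTM, the recovered camera centre is directly a UTM localization of the photo.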
Figure 3. Data collection route and GPS localizations.

Figure 3 also depicts the Garmin GPS locations collected on the first circuit as pushpin icons. Comparing these to the actual route, we see significant GPS errors near the top of the figure. These were due to multi-path errors caused by the tall buildings in that area. The SIFT localizations from all four cameras were merged, as were the GPS localizations. Figure 4 plots the merged SIFT localizations (top) and the merged GPS localizations (bottom), with dashed line segments connecting adjacent localizations to highlight sequential changes in position.

Figure 4. Merged SIFT (top) and GPS (bottom) localizations.

Note that both the GPS and SIFT localization techniques produced spurious localizations that are physically impossible given the adjacent localizations and the velocity at which the cart was pushed. Both types of localizations were filtered to remove spurious points using a physically plausible velocity cutoff of 2 meters/second. The filtered localizations are plotted in Figure 5. Note that the GPS multi-path errors are still present in the filtered GPS data.

4.3 Ground truth estimation

In order to compare the accuracy of the GPS and SIFT localizations, ground truth was needed to compare both against. The approach taken was to first use the collected D5000 photos to identify the straight line segments followed during the data collection. For each line segment, the photos collected along it were identified. Next, a high-resolution geo-registered aerial photograph was downloaded from the Massachusetts Office of Geographic Information (MassGIS). Using MATLAB, line segment endpoints were manually selected in the aerial photo and converted to UTM coordinates. It is estimated that the cross-track error of these ground truth line segments is less than 0.5 meters with respect to the image coordinates.
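The 2 meters/second plausibility filter described above can be sketched as a greedy pass over time-stamped positions, dropping any localization that implies an impossible speed relative to the last accepted point (a minimal Python sketch of the idea; the paper's analysis was done in MATLAB, and the first point is assumed valid):

```python
import numpy as np

def filter_spurious(times, positions, v_max=2.0):
    """Return indices of localizations to keep: a point is dropped if it
    implies a speed above v_max (metres/second) relative to the most
    recently accepted point."""
    positions = np.asarray(positions, dtype=float)
    keep = [0]                                   # assume the first fix is valid
    for i in range(1, len(times)):
        dt = times[i] - times[keep[-1]]
        dist = np.linalg.norm(positions[i] - positions[keep[-1]])
        if dt > 0 and dist / dt <= v_max:
            keep.append(i)
    return keep
```

For example, with photos 5 seconds apart, a jump of 95 meters between consecutive localizations (19 m/s) is rejected, while later points consistent with the last accepted fix are retained.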
Given the size of the pushcart and the separation of the four cameras on it, the estimated accuracy of the ground truth position of each camera is approximately 1 meter. Thirty-two (32) ground truth line segments were defined using this technique and are plotted in Figure 6. Figure 7 shows the filtered SIFT localizations (top) and filtered GPS localizations (bottom) associated with the line segments, plotted together with the segments. Note that there are a larger number of GPS localizations than SIFT localizations.
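Comparing localizations to these ground-truth segments uses the cross-track metric of Equation (1): the perpendicular distance from each localization to its segment, combined as a root mean square. A sketch, assuming 2D UTM coordinates:

```python
import numpy as np

def cross_track_rmse(points, seg_start, seg_end):
    """RMSE of perpendicular (cross-track) distances from localizations to
    a ground-truth line segment, as in Eq. (1)."""
    seg_start = np.asarray(seg_start, dtype=float)
    d = np.asarray(seg_end, dtype=float) - seg_start
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to the segment
    e = (np.asarray(points, dtype=float) - seg_start) @ n  # signed cross-track errors
    return np.sqrt(np.mean(e ** 2))
```

Because the along-track position of each photo is unknown, only this perpendicular component is a reliable error measure; averaging it over segments heading in all directions cancels directional biases, as noted in Section 4.4.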
Figure 5. Filtered SIFT (top) and GPS (bottom) localizations.

Figure 6. Ground truth line segments.

4.4 Cross-track error calculation

In order to compare the accuracy of the GPS and SIFT localizations, an error metric was needed. Ideally, the two-dimensional error while traversing the ground truth line segments would be used. However, while the pushcart was known to move along each line segment when the corresponding photos were collected, it was not known where along the segment each photo was taken. Therefore, the most reliable error metric is the cross-track error perpendicular to the line segment. Given that the line segments are on a closed route with nearly equal distances traveled in all directions, any biases would be minimized by averaging the cross-track errors from all line segments. Equation (1) describes the cross-track root mean squared error (RMSE) calculated over the N localizations (either SIFT or GPS) along each line segment, where e_i is the cross-track error of the i-th localization:

RMSE = sqrt( (1/N) * sum_{i=1}^{N} e_i^2 )    (1)

Twenty-five (25) of the 32 line segments had both valid GPS and valid SIFT localizations. Figure 8 shows a plot of the SIFT (*) and GPS (+) cross-track RMSEs for these line segments. Note the large GPS errors associated with line segments 3 and 4. These errors are due to the multi-path errors noted in Figures 3 and 4. The average GPS and SIFT RMSEs were then calculated across all of the line segments and are depicted in Figure 8 as the solid (SIFT) and dashed (GPS) horizontal lines. On average, the SIFT localizations had only half the cross-track error of GPS, demonstrating twice the localization accuracy in the urban environment of the data collection.

4.5 Prior SIFT cloud sub-sampling

This project used an existing SIFT point cloud of opportunity, with a random density of SIFT features that was not designed for any specific system performance level.
When designing a distributed robotic system that uses SIFT-based localization, one design question is how dense the prior SIFT cloud must be to achieve a specified localization performance. A natural question, therefore, is how the SIFT localization performance degrades when the prior SIFT cloud has fewer points to match against. To test this, the prior SIFT point cloud was sub-sampled by removing SIFT points to varying degrees, and the SIFT localization processing was then re-run on all of the pushcart-collected photos. The sub-sampling factors used were 0.875, 0.75, 0.5, 0.25, and 0.125. Given that the initial SIFT cloud had 337,057 keypoints, this resulted in sub-sampled point clouds with 294,924, 252,792, 168,528, 84,264, and 42,132 keypoints.

4.6 Effect of sub-sampling on localization

As SIFT keypoints are removed from the prior point cloud, fewer of them are available for matching against SIFT features found in each new photo. While some successful photo localizations will have poorer accuracy due to fewer SIFT feature matches with which to triangulate the photo's position, the greatest effect is a reduction in the number of photos with successful localizations, due to built-in accuracy thresholds in the photo matching software. In other words, the
number of successful localizations along a line segment will diminish as the density of SIFT keypoints in the prior world model is reduced. The effect on robot navigation will be longer distances between localizations, resulting in larger navigation errors during the dead-reckoning between localizations.

Figure 7. SIFT (top) and GPS (bottom) localizations associated with line segments (dashed lines).

Figure 8. Cross-track RMSE for SIFT and GPS localizations.

Figure 9. Number of total successful SIFT localizations as a function of SIFT keypoints in the prior world model.

Figure 9 plots the total number of successful SIFT localizations for all four cameras as a function of the number of SIFT keypoints in the prior world model. To first order, the number of SIFT localizations varies linearly with the total number of keypoints in the prior model.

5 Conclusions

The use of a prior world model for robot localization in an urban environment was successfully demonstrated. This validates the DRONE concept of a mobile robot using a world model accessible over a network, and also demonstrates the potential for robot navigation using prior world models built from tourist photo datasets. Since the SIFT-based localization technique is currently processing intensive, one would likely employ a system design in which local photos are sent to a localization service on the network. The SIFT-based localization technique employed had twice the average localization accuracy of GPS in an urban environment. However, there tended to be fewer SIFT localizations than GPS localizations in some areas. As the density of SIFT keypoints in the prior model is reduced, the number of successful SIFT localizations tends to fall off linearly.
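The sub-sampling experiment of Section 4.5 can be sketched as a random retention of a fraction of the keypoint cloud, keeping each kept keypoint paired with its descriptor (a hypothetical sketch assuming coordinates and descriptors stored as parallel NumPy arrays):

```python
import numpy as np

def subsample_cloud(keypoints, descriptors, fraction, seed=0):
    """Randomly retain a fraction of the prior SIFT keypoint cloud
    (as in the Section 4.5 experiment, fractions 0.875 down to 0.125),
    keeping 3D coordinates and descriptors paired."""
    rng = np.random.default_rng(seed)
    n = len(keypoints)
    idx = rng.choice(n, size=int(round(n * fraction)), replace=False)
    return keypoints[idx], descriptors[idx]
```

Re-running localization against each reduced cloud then yields the successful-localization counts plotted in Figure 9.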
6 References

[1] Vasile, L. Skelly, K. Ni, R. Heinrichs, O. Camps, and M. Sznaier, "Efficient City-sized 3D Reconstruction from Ultra-High Resolution Aerial and Ground Video Imagery," to appear at the International Symposium on Visual Computing, Las Vegas, NV, 2011.

[2] K. Ni, Z. Sun, and N. Bliss, "3-D Image Geo-Registration Using Vision-Based Modeling," Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Prague, Czech Republic, 2011.

[3] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60.

[4] N. Snavely, S. M. Seitz, and R. Szeliski, "Photo tourism: Exploring photo collections in 3D," in SIGGRAPH Conference Proceedings, New York, NY, USA: ACM Press, 2006.

[5] N. Snavely, S. M. Seitz, and R. Szeliski, "Modeling the world from Internet photo collections," International Journal of Computer Vision, vol. 80, no. 2, November.

[6] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, June 2005.

[7] S. Se, D. Lowe, and J. Little, "Global localization using distinctive visual features," IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, 2002.

[8] S. Se, D. Lowe, and J. Little, "Vision-based mobile robot localization and mapping using scale-invariant features," Proceedings 2001 IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2001.

[9] D. Santosh, S. Achar, and C. V. Jawahar, "Autonomous image-based exploration for mobile robot navigation," IEEE International Conference on Robotics and Automation (ICRA), May 2008.

[10] H. Durrant-Whyte and T. Bailey, "Simultaneous Localization and Mapping (SLAM): Part I, The Essential Algorithms," IEEE Robotics and Automation Magazine, vol. 13, no. 2, 2006.

[11] P. Cho and N. Snavely, "3D exploitation of 2D ground-level and aerial imagery," Applied Imagery Pattern Recognition Workshop, Washington, DC, Oct. 2011.

[12] Z. Sun, N. Bliss, and K. Ni, "A 3D feature model for image matching," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2010.

[13] K. Ni, Z. Sun, and N. Bliss, "3D image geo-registration using vision-based modeling," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011.
Geometry of Aerial photogrammetry Panu Srestasathiern, PhD. Researcher Geo-Informatics and Space Technology Development Agency (Public Organization) Image formation - Recap The geometry of imaging system
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationDEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION
2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent
More informationAided-inertial for Long-term, Self-contained GPS-denied Navigation and Mapping
Aided-inertial for Long-term, Self-contained GPS-denied Navigation and Mapping Erik Lithopoulos, Louis Lalumiere, Ron Beyeler Applanix Corporation Greg Spurlock, LTC Bruce Williams Defense Threat Reduction
More information3D Environment Reconstruction
3D Environment Reconstruction Using Modified Color ICP Algorithm by Fusion of a Camera and a 3D Laser Range Finder The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15,
More informationCamera Calibration for a Robust Omni-directional Photogrammetry System
Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Extrinsic camera calibration method and its performance evaluation Jacek Komorowski 1 and Przemyslaw Rokita 2 arxiv:1809.11073v1 [cs.cv] 28 Sep 2018 1 Maria Curie Sklodowska University Lublin, Poland jacek.komorowski@gmail.com
More informationReality Modeling Drone Capture Guide
Reality Modeling Drone Capture Guide Discover the best practices for photo acquisition-leveraging drones to create 3D reality models with ContextCapture, Bentley s reality modeling software. Learn the
More information2. TARGET PHOTOS FOR ANALYSIS
Proceedings of the IIEEJ Image Electronics and Visual Computing Workshop 2012 Kuching, Malaysia, November 21-24, 2012 QUANTITATIVE SHAPE ESTIMATION OF HIROSHIMA A-BOMB MUSHROOM CLOUD FROM PHOTOS Masashi
More informationDense Tracking and Mapping for Autonomous Quadrocopters. Jürgen Sturm
Computer Vision Group Prof. Daniel Cremers Dense Tracking and Mapping for Autonomous Quadrocopters Jürgen Sturm Joint work with Frank Steinbrücker, Jakob Engel, Christian Kerl, Erik Bylow, and Daniel Cremers
More informationThe raycloud A Vision Beyond the Point Cloud
The raycloud A Vision Beyond the Point Cloud Christoph STRECHA, Switzerland Key words: Photogrammetry, Aerial triangulation, Multi-view stereo, 3D vectorisation, Bundle Block Adjustment SUMMARY Measuring
More informationObject Classification Using Tripod Operators
Object Classification Using Tripod Operators David Bonanno, Frank Pipitone, G. Charmaine Gilbreath, Kristen Nock, Carlos A. Font, and Chadwick T. Hawley US Naval Research Laboratory, 4555 Overlook Ave.
More informationNAVIGATION AND ELECTRO-OPTIC SENSOR INTEGRATION TECHNOLOGY FOR FUSION OF IMAGERY AND DIGITAL MAPPING PRODUCTS. Alison Brown, NAVSYS Corporation
NAVIGATION AND ELECTRO-OPTIC SENSOR INTEGRATION TECHNOLOGY FOR FUSION OF IMAGERY AND DIGITAL MAPPING PRODUCTS Alison Brown, NAVSYS Corporation Paul Olson, CECOM Abstract Several military and commercial
More informationInternational Journal for Research in Applied Science & Engineering Technology (IJRASET) A Review: 3D Image Reconstruction From Multiple Images
A Review: 3D Image Reconstruction From Multiple Images Rahul Dangwal 1, Dr. Sukhwinder Singh 2 1 (ME Student) Department of E.C.E PEC University of TechnologyChandigarh, India-160012 2 (Supervisor)Department
More informationAn Image Based 3D Reconstruction System for Large Indoor Scenes
36 5 Vol. 36, No. 5 2010 5 ACTA AUTOMATICA SINICA May, 2010 1 1 2 1,,,..,,,,. : 1), ; 2), ; 3),.,,. DOI,,, 10.3724/SP.J.1004.2010.00625 An Image Based 3D Reconstruction System for Large Indoor Scenes ZHANG
More informationAPPLICATION OF AERIAL VIDEO FOR TRAFFIC FLOW MONITORING AND MANAGEMENT
Pitu Mirchandani, Professor, Department of Systems and Industrial Engineering Mark Hickman, Assistant Professor, Department of Civil Engineering Alejandro Angel, Graduate Researcher Dinesh Chandnani, Graduate
More informationRelating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps
Relating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps John W. Allen Samuel Gin College of Engineering GPS and Vehicle Dynamics Lab Auburn University Auburn,
More informationSIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE
SIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE S. Hirose R&D Center, TOPCON CORPORATION, 75-1, Hasunuma-cho, Itabashi-ku, Tokyo, Japan Commission
More informationLong-term motion estimation from images
Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras
More informationShared Perception for Autonomous Systems
Shared Perception for Autonomous Systems Herbert E.M. Viggh, Danelle C. Shah, Peter L. Cho, Nicholas L. Armstrong-Crews, Myra Nam, and Geoffrey Brown Small autonomous vehicles can carry only a limited
More informationPROSILICA GigE Vision Kameras CCD und CMOS
PROSILICA GigE Vision Kameras CCD und CMOS Case Study: GE4900C, GE4000C, GE1910C and GC2450C used in UAV-technology Prosilica Cameras Go Airborne Prosilica Kameras überzeugen mit hervorragender Bildqualität,
More informationLarge Scale 3D Reconstruction by Structure from Motion
Large Scale 3D Reconstruction by Structure from Motion Devin Guillory Ziang Xie CS 331B 7 October 2013 Overview Rome wasn t built in a day Overview of SfM Building Rome in a Day Building Rome on a Cloudless
More informationRobot Localization based on Geo-referenced Images and G raphic Methods
Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,
More informationPerspective Sensing for Inertial Stabilization
Perspective Sensing for Inertial Stabilization Dr. Bernard A. Schnaufer Jeremy Nadke Advanced Technology Center Rockwell Collins, Inc. Cedar Rapids, IA Agenda Rockwell Collins & the Advanced Technology
More informationApplications of Mobile LiDAR and UAV Sourced Photogrammetry
Applications of Mobile LiDAR and UAV Sourced Photogrammetry Thomas J. Pingel and Earle W. Isibue Northern Illinois University 2017 Illinois GIS Association (ILGISA) Annual Meeting October 2-4, 2017 tpingel.org
More informationEvaluating the Performance of a Vehicle Pose Measurement System
Evaluating the Performance of a Vehicle Pose Measurement System Harry Scott Sandor Szabo National Institute of Standards and Technology Abstract A method is presented for evaluating the performance of
More informationWhere s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map
Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi October 16, 2007 Robotics Institute Carnegie
More informationLOCAL AND GLOBAL DESCRIPTORS FOR PLACE RECOGNITION IN ROBOTICS
8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING - 19-21 April 2012, Tallinn, Estonia LOCAL AND GLOBAL DESCRIPTORS FOR PLACE RECOGNITION IN ROBOTICS Shvarts, D. & Tamre, M. Abstract: The
More informationV-Sentinel: A Novel Framework for Situational Awareness and Surveillance
V-Sentinel: A Novel Framework for Situational Awareness and Surveillance Suya You Integrated Media Systems Center Computer Science Department University of Southern California March 2005 1 Objective Developing
More informationImage correspondences and structure from motion
Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.
More informationTRAINING MATERIAL HOW TO OPTIMIZE ACCURACY WITH CORRELATOR3D
TRAINING MATERIAL WITH CORRELATOR3D Page2 Contents 1. UNDERSTANDING INPUT DATA REQUIREMENTS... 4 1.1 What is Aerial Triangulation?... 4 1.2 Recommended Flight Configuration... 4 1.3 Data Requirements for
More informationCalibration of a rotating multi-beam Lidar
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract
More informationColored Point Cloud Registration Revisited Supplementary Material
Colored Point Cloud Registration Revisited Supplementary Material Jaesik Park Qian-Yi Zhou Vladlen Koltun Intel Labs A. RGB-D Image Alignment Section introduced a joint photometric and geometric objective
More informationGlobal localization from a single feature correspondence
Global localization from a single feature correspondence Friedrich Fraundorfer and Horst Bischof Institute for Computer Graphics and Vision Graz University of Technology {fraunfri,bischof}@icg.tu-graz.ac.at
More informationTAKING FLIGHT JULY/AUGUST 2015 COMMERCIAL UAS ADVANCEMENT BALLOONS THE POOR MAN S UAV? SINGLE PHOTON SENSOR REVIEW VOLUME 5 ISSUE 5
VOLUME 5 ISSUE 5 JULY/AUGUST 2015 TAKING FLIGHT 14 COMMERCIAL UAS ADVANCEMENT 24 BALLOONS THE POOR MAN S UAV? 30 SINGLE PHOTON SENSOR REVIEW Closing in on 500 authorizations the FAA has expedited the exemption
More informationPRECISION ANALYSIS OF VISUAL ODOMETRY BASED ON DISPARITY CHANGING
PRECISION ANALYSIS OF VISUAL ODOMETRY BASED ON DISPARITY CHANGING C. Y. Fu a, J. R. Tsay a a Dept. of Geomatics, National Cheng Kung University, Tainan, Taiwan - (P66044099, tsayjr)@ncku.edu.tw Commission
More informationGPS/GIS Activities Summary
GPS/GIS Activities Summary Group activities Outdoor activities Use of GPS receivers Use of computers Calculations Relevant to robotics Relevant to agriculture 1. Information technologies in agriculture
More informationA Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles
Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 54 A Reactive Bearing Angle Only Obstacle Avoidance Technique for
More informationRobotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007
Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationVideo Georegistration: Key Challenges. Steve Blask Harris Corporation GCSD Melbourne, FL 32934
Video Georegistration: Key Challenges Steve Blask sblask@harris.com Harris Corporation GCSD Melbourne, FL 32934 Definitions Registration: image to image alignment Find pixel-to-pixel correspondences between
More information3D reconstruction how accurate can it be?
Performance Metrics for Correspondence Problems 3D reconstruction how accurate can it be? Pierre Moulon, Foxel CVPR 2015 Workshop Boston, USA (June 11, 2015) We can capture large environments. But for
More informationLight Field Occlusion Removal
Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image
More informationOn-line and Off-line 3D Reconstruction for Crisis Management Applications
On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationBUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION
BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION Ruijin Ma Department Of Civil Engineering Technology SUNY-Alfred Alfred, NY 14802 mar@alfredstate.edu ABSTRACT Building model reconstruction has been
More informationa Geo-Odyssey of UAS LiDAR Mapping Henno Morkel UAS Segment Specialist DroneCon 17 May 2018
a Geo-Odyssey of UAS LiDAR Mapping Henno Morkel UAS Segment Specialist DroneCon 17 May 2018 Abbreviations UAS Unmanned Aerial Systems LiDAR Light Detection and Ranging UAV Unmanned Aerial Vehicle RTK Real-time
More informationLiDAR & Orthophoto Data Report
LiDAR & Orthophoto Data Report Tofino Flood Plain Mapping Data collected and prepared for: District of Tofino, BC 121 3 rd Street Tofino, BC V0R 2Z0 Eagle Mapping Ltd. #201 2071 Kingsway Ave Port Coquitlam,
More informationImprovement of SURF Feature Image Registration Algorithm Based on Cluster Analysis
Sensors & Transducers 2014 by IFSA Publishing, S. L. http://www.sensorsportal.com Improvement of SURF Feature Image Registration Algorithm Based on Cluster Analysis 1 Xulin LONG, 1,* Qiang CHEN, 2 Xiaoya
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationGeometric Rectification of Remote Sensing Images
Geometric Rectification of Remote Sensing Images Airborne TerrestriaL Applications Sensor (ATLAS) Nine flight paths were recorded over the city of Providence. 1 True color ATLAS image (bands 4, 2, 1 in
More informationINVESTIGATION OF 1:1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES
INVESTIGATION OF 1:1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES S. Rhee a, T. Kim b * a 3DLabs Co. Ltd., 100 Inharo, Namgu, Incheon, Korea ahmkun@3dlabs.co.kr b Dept. of Geoinformatic
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationTHE RANGER-UAV FEATURES
THE RANGER-UAV The Ranger Series Ranger-UAV is designed for the most demanding mapping applications, no compromises made. With a 9 meter laser range, this system produces photorealistic 3D point clouds
More informationImproving Initial Estimations for Structure from Motion Methods
Improving Initial Estimations for Structure from Motion Methods University of Bonn Outline Motivation Computer-Vision Basics Stereo Vision Bundle Adjustment Feature Matching Global Initial Estimation Component
More informationProbabilistic Matching for 3D Scan Registration
Probabilistic Matching for 3D Scan Registration Dirk Hähnel Wolfram Burgard Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany Abstract In this paper we consider the problem
More informationCS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know
More informationOn Grid: Tools and Techniques to Place Reality Data in a Geographic Coordinate System
RC21940 On Grid: Tools and Techniques to Place Reality Data in a Geographic Coordinate System Seth Koterba Principal Engineer ReCap Autodesk Ramesh Sridharan Principal Research Engineer Infraworks Autodesk
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning
1 ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning Petri Rönnholm Aalto University 2 Learning objectives To recognize applications of laser scanning To understand principles
More informationSTEREO IMAGE POINT CLOUD AND LIDAR POINT CLOUD FUSION FOR THE 3D STREET MAPPING
STEREO IMAGE POINT CLOUD AND LIDAR POINT CLOUD FUSION FOR THE 3D STREET MAPPING Yuan Yang, Ph.D. Student Zoltan Koppanyi, Post-Doctoral Researcher Charles K Toth, Research Professor SPIN Lab The University
More informationResearch on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration
, pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationData Association for SLAM
CALIFORNIA INSTITUTE OF TECHNOLOGY ME/CS 132a, Winter 2011 Lab #2 Due: Mar 10th, 2011 Part I Data Association for SLAM 1 Introduction For this part, you will experiment with a simulation of an EKF SLAM
More information