Cloud-based Control and vslam through Cooperative Mapping and Localization*


Berat A. Erol, Satish Vaishnav, Joaquin D. Labrado, Patrick Benavidez, and Mo Jamshidi
Department of Electrical and Computer Engineering
The University of Texas at San Antonio, One UTSA Circle, San Antonio, TX 78249, USA

* This work was supported by Grant number FA from the Air Force Research Laboratory and OSD through a contract with North Carolina Agricultural and Technical State University.

Abstract: Simultaneous Localization and Mapping (SLAM) is one of the most popular and widely applied methods for accurate localization and navigation. Our experiments showed that vision-based mapping helps agents navigate in unknown environments using feature-based mapping and localization. Instead of using a classical monocular camera as the vision source, we decided to use RGB-D (Red, Green, Blue, Depth) cameras for better feature detection, 3D mapping, and localization, since an RGB-D camera returns depth data in addition to the usual RGB data. Moreover, we have applied this method to multiple robots using the concept of cooperative SLAM. This paper presents our current research findings and proposes a new architecture based on data gathered from RGB-D cameras, namely the Microsoft Kinect and the ASUS Xtion Pro, for 3D mapping and localization.

Index Terms: Cooperative SLAM, vSLAM, Robotics, RGB-D, Cloud Computing

I. INTRODUCTION

GPS-denied localization can be a computational bottleneck for autonomous robotic systems navigating an unknown environment, locating objects of interest, performing automation operations, etc. The problem requires a system to understand the work environment, be aware of the obstacles, and map the environment. This is commonly referred to as the chicken-and-egg problem in the literature, since calculating the location requires a map, while mapping the area requires the system to know its current location. A system therefore needs to initialize itself before and during the mapping process, and to localize itself simultaneously. Hence, the problem is defined as Simultaneous Localization and Mapping (SLAM).

The ability to sense the environment is pivotal for any autonomous mobile system that intends to navigate through a known or unknown area. In the past, this was done with the aid of sensors that returned data allowing the mobile systems to know where they were in the world [1]. Nowadays there is an easier way to accomplish this task, with devices such as the Xbox Kinect: a camera that provides RGB-D information to the mobile system [2]–[4]. RGB-D sensors are now being used to help mobile systems navigate by creating a map of their work environment [1], [5]–[8].

Cooperative SLAM is an operational approach that aims to build a common map of the entire work environment and, based on shared data, to calculate the locations of the system's agents at the same time. If SLAM performed by a single system already involves high computational complexity and high power consumption, then the computational complexity and power consumption with multiple systems will undeniably increase. This calls for a solution that acts as a master platform, building a general map that can be accessed by any system currently in the operation. Additionally, it will process all the sensor data provided by the multiple systems, or by any system added to the operation later on.
The purpose of this paper is to propose a vision-based SLAM (vSLAM) system that combines RGB-D cameras with a cloud computing back-end. Such a system is well suited to multiple unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), and to cooperative workflows among them. The algorithm is discussed here in order to better identify key features in the world that can be used in each agent's own map, or in one that is shared across multiple agents. The complexity of the problem depends on the system and on the operation to be carried out. Computing vSLAM algorithms on the robot itself is typically avoided, as it comes with a relatively high computational cost, which in turn affects power consumption and the positional data update rate. Storage is also an issue, as large amounts of data can be acquired for mapping tasks in vSLAM. Both the storage and computational limits can prohibit extended use of systems relying on vSLAM.

The paper is organized as follows: Section II is the literature survey, followed by the proposed architecture in Section III. Section IV covers the experimental results, followed by the conclusions in Section V.

II. LITERATURE SURVEY

The literature contains several implementations of cooperative SLAM that use stereo vision camera systems to acquire the data for cooperative data fusion [5]. Information filters have suitable and convenient characteristics, especially for multi-sensor data fusion tasks. In [6], a vision-based SLAM approach is presented using the feedback from an RGB-D camera. The features from the

color images are extracted and projected onto the point cloud obtained from the depth images. 3D correspondences between consecutive images are then used to find the transformation between the two images. The resulting transformation matrices are used to build the map data, and the localization process is performed using this map data. In addition to these approaches, [7] presents a method of loop closing that uses visual features in conjunction with laser scan data. The algorithm uses the pose and time information: when the image at a given moment matches one of the previous images, the patches scanned by the laser at those two moments are compared. From this information a covariance is obtained, which provides appropriate information for loop closing.

For our experiments, the implemented algorithm was a grid-based FastSLAM run on a single node, two nodes, four nodes, and eight nodes of a cluster. The execution time decreased by several orders of magnitude as the number of nodes increased. On the other hand, this approach had some limitations, such as requiring separate hardware for the server implementations and manual deployment of the nodes, as well as neglecting network delays and latencies.

A. Feature Detection and Fusion into Map Data

The most common methods for feature detection in SLAM systems have been SIFT [9], [10], ORB [10], and SURF [1]. These algorithms are widely used for visual odometry, for building 3D maps, and for detecting objects or points of interest. All three techniques are useful in their own right for object recognition. The only difficulty with these algorithms is that each of them requires a large number of images to be stored. This could be handled by cloud computing, offloading the images and running a SIFT or SURF algorithm on the cloud. This is a method that will be tested once the test bed for the system is created.

An encouraging approach is DAVinCi, a cloud computing framework for service robots. It acts as a Platform as a Service (PaaS) designed to perform secondary tasks such as mapping and localization [11]. It also works as an active interface between all the heterogeneous agents, the back-end computation, and storage, through the use of ROS and the Hadoop Distributed File System (HDFS). Its role is to act as a master node that runs the ROS name services and maintains the list of publishers and subscribers, as well as receiving messages either from the HDFS back end or from other agents/robots. On the other hand, it is important to note that DAVinCi does not provide a real-time implementation.

Another way to find features in the mobile system's environment is through the use of point clouds and the Iterative Closest Point (ICP) algorithm [12], [13]. Point clouds allow systems to use the depth information from the RGB-D data stream that the sensor returns. The ICP method finds the smallest distance between two points in a point cloud; this is done by generating a point cloud and then comparing the individual points.

Covariance Intersection (CI) is a data fusion method that combines two or more state estimates, gathering sensor measurements from different platforms that have an unknown correlation among them. Improvements in the location determination and mapping process are achieved by collaborative estimation, implemented with an information filter and a CI technique.
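As a minimal sketch of the CI fusion step (our illustration, assuming NumPy; the weight w is chosen here by minimizing the trace of the fused covariance, a common heuristic, not necessarily the choice made in [5]):

    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, steps=101):
        """Fuse two state estimates with unknown cross-correlation via CI:
        P^-1 = w * P1^-1 + (1 - w) * P2^-1, choosing w to minimize trace(P)."""
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        best = None
        for w in np.linspace(0.0, 1.0, steps):
            P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
            if best is None or np.trace(P) < np.trace(best[1]):
                x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
                best = (x, P)
        return best  # fused state estimate and its consistent covariance

Unlike a naive Kalman update, the CI combination stays consistent even when the two platforms' errors are correlated in an unknown way, which is exactly the multi-robot situation described above.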
[5] shows fewer navigation errors when using CI, coping with the memory and computation cost that arises from the increasing size of the augmented state vector and its corresponding uncertainty, with the aim of improving the robustness of cooperative visual SLAM (vSLAM).

B. Proposed Method for Map Merging

For our experiments, the map points must be transformed from the world coordinate frame to the camera-centered coordinate frame; in other words, we project map points into the image plane. This is done by left-multiplication with the matrix E_CW, which represents the camera pose [14]: a 4x4 transformation matrix taking coordinates in the world frame W to the camera frame C. With p_jW the position of map point j in the world coordinate frame and p_jC its position in the camera-centered frame,

    p_{jC} = E_{CW} \cdot p_{jW}    (1)

For visual observation, after tracking a video frame successfully using parallel tracking and mapping (PTAM), the pose estimate is scaled by the current estimate of the scale factor λ and transformed from the front camera coordinate system to the quadrocopter's coordinate system, which leads to a direct observation of its pose given by [15]:

    h_p(x_t) := (x_t, y_t, z_t, \phi_t, \theta_t, \omega_t)^T    (2)

    z_{p,t} := f(E_{DC} \cdot E_{C,t})    (3)

where (x_t, y_t, z_t) denotes the position of the quadcopter in meters in world coordinates, (φ_t, θ_t, ω_t) denotes the roll, pitch, and yaw angles in degrees, E_{C,t} is the estimated camera pose (scaled with λ), E_{DC} is the constant transformation from the camera to the quadcopter, and f is the conversion to the roll-pitch-yaw representation. More details on this stage can be found in [6], [7], [14]–[16].

For our experimental studies, two local maps, m_1 and m_2, need to be merged. For this purpose we used point cloud data processing techniques [12], [13]. The first step is to define a global map in which the local maps will be placed at the appropriate positions with properly defined transformations. Since the two quadrocopters create two local maps, the global map M consists of those two local maps, together with the transformation matrices containing the rotational and translational information of the initial poses of the two quadrocopters.
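As a minimal illustration of equation (1) (a sketch assuming NumPy; the rotation and translation values below are arbitrary examples, not values from our experiments):

    import numpy as np

    # E_CW: 4x4 homogeneous transform taking world-frame coordinates to the
    # camera-centered frame. R_cw and t_cw are illustrative placeholders.
    R_cw = np.eye(3)                       # example rotation (identity)
    t_cw = np.array([0.0, 0.0, -1.0])      # example translation
    E_CW = np.eye(4)
    E_CW[:3, :3] = R_cw
    E_CW[:3, 3] = t_cw

    p_jW = np.array([1.0, 2.0, 3.0, 1.0])  # map point j, world frame (homogeneous)
    p_jC = E_CW @ p_jW                     # equation (1): p_jC = E_CW . p_jW

Equations (2) and (3) are compositions of the same kind of 4x4 transforms, followed by conversion to the roll-pitch-yaw representation.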

To construct T_1, the transformation matrix for the first quadcopter, X_01 = (x_01, y_01, z_01, φ_01, θ_01, ω_01) defines the initial pose of the first agent in the global map M: (x_01, y_01, z_01) is the initial position of the first quadcopter in the X, Y, and Z directions, and (φ_01, θ_01, ω_01) is its initial orientation with respect to the Y, X, and Z axes, i.e., roll, pitch, and yaw. Therefore,

    T_1 = T_{x1} \cdot T_{y1} \cdot T_{z1}    (4)

where

    T_{x1} = \begin{bmatrix} 1 & 0 & 0 & x_{01} \\ 0 & \cos\theta_{01} & -\sin\theta_{01} & 0 \\ 0 & \sin\theta_{01} & \cos\theta_{01} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    T_{y1} = \begin{bmatrix} \cos\phi_{01} & 0 & \sin\phi_{01} & 0 \\ 0 & 1 & 0 & y_{01} \\ -\sin\phi_{01} & 0 & \cos\phi_{01} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    T_{z1} = \begin{bmatrix} \cos\omega_{01} & -\sin\omega_{01} & 0 & 0 \\ \sin\omega_{01} & \cos\omega_{01} & 0 & 0 \\ 0 & 0 & 1 & z_{01} \\ 0 & 0 & 0 & 1 \end{bmatrix}

Similarly to equation (4), T_2 = T_{x2} \cdot T_{y2} \cdot T_{z2} is the transformation matrix for the second quadrocopter, whose initial pose in the global map M is X_02 = (x_02, y_02, z_02, φ_02, θ_02, ω_02); T_{x2}, T_{y2}, and T_{z2} are the corresponding transformation matrices with respect to the x, y, and z axes, respectively. The transformation matrix T_M [17], on the other hand, has the form

    T_M = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}    (5)

where R defines the rotation about the three axes and T defines the translation with respect to the three axes. The rotation matrix

    R = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix}

consists of the following elements:

    R_{11} = \cos\theta\cos\phi
    R_{12} = \sin\omega\sin\theta\cos\phi - \cos\omega\sin\phi
    R_{13} = \cos\omega\sin\theta\cos\phi + \sin\omega\sin\phi
    R_{21} = \cos\theta\sin\phi
    R_{22} = \sin\omega\sin\theta\sin\phi + \cos\omega\cos\phi
    R_{23} = \cos\omega\sin\theta\sin\phi - \sin\omega\cos\phi
    R_{31} = -\sin\theta
    R_{32} = \sin\omega\cos\theta
    R_{33} = \cos\omega\cos\theta

The roll angle can be computed as θ = -sin^{-1}(R_{31}). Moreover, using the two-argument arctangent, the yaw angle can be found as ω = tan^{-1}(R_{32}/R_{33}), or as ω = tan^{-1}[(R_{32}/cos θ)/(R_{33}/cos θ)] for cos θ < 0 or cos θ > 0, respectively. Lastly, the pitch angle can be calculated from φ = tan^{-1}[(R_{21}/cos θ)/(R_{11}/cos θ)]. Finally, the equation for the global map, consisting of the union of the local maps together with their transformations, is

    M = T_1 \cdot m_1 \cup T_2 \cdot m_2    (6)

III. PROPOSED ARCHITECTURE

The vSLAM method will be used to detect and identify features in the images grabbed by the RGB-D camera. We have previously used vSLAM on the cloud to help agents navigate in their environment [4], [18]. This was done through the use of RG-chromaticity to process and find features in the environment: each pixel's RGB intensity was inspected and matched against images stored in the database. This was implemented and tested on a Pioneer2 UGV, which was guided around the 2nd floor of the UTSA BSE building and returned to its initial starting position. Building on this work, we have looked into other methods of feature detection so that the system can understand its environment efficiently, since the RG-chromaticity technique has some drawbacks when lighting is not consistent.

Fig. 1 shows the general architecture of a cloud vSLAM node using different packages in ROS, such as OpenCV and the Point Cloud Library (PCL) [12]. ROS is a very useful tool in the creation of this algorithm, as it is designed to handle all the sensor data for the system [16]. The workflow for merging the local maps and creating a global map, sketched in the code below, can be summarized as:

- Obtain the local maps using a ROS package.
- Apply point cloud smoothing on the maps to remove noisy data.
- Define a global map M.
- Apply the transformation matrices to the local maps.
- Use the map-forming equation (6) to obtain M.
- Apply point cloud registration between the two transformed local maps to merge them into the global map.

Fig. 1: General architecture of a cloud vSLAM node.
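The following is a minimal sketch of this workflow and of equations (4)–(6), assuming NumPy; smoothing and the final ICP registration/refinement are left to a library such as PCL, which we use in practice:

    import numpy as np

    def pose_to_transform(x, y, z, phi, theta, omega):
        """Build T = T_x . T_y . T_z as in equation (4). Following the paper's
        convention, T_x carries the x translation and the rotation by theta,
        T_y carries y and phi, and T_z carries z and omega (radians)."""
        c, s = np.cos, np.sin
        Tx = np.array([[1, 0,        0,         x],
                       [0, c(theta), -s(theta), 0],
                       [0, s(theta),  c(theta), 0],
                       [0, 0,        0,         1]])
        Ty = np.array([[ c(phi), 0, s(phi), 0],
                       [ 0,      1, 0,      y],
                       [-s(phi), 0, c(phi), 0],
                       [ 0,      0, 0,      1]])
        Tz = np.array([[c(omega), -s(omega), 0, 0],
                       [s(omega),  c(omega), 0, 0],
                       [0,         0,        1, z],
                       [0,         0,        0, 1]])
        return Tx @ Ty @ Tz

    def merge_maps(m1, m2, pose1, pose2):
        """Equation (6): M = T1.m1 U T2.m2, where m1 and m2 are (N, 3)
        point arrays and pose1/pose2 are the agents' initial poses."""
        T1, T2 = pose_to_transform(*pose1), pose_to_transform(*pose2)
        to_h = lambda m: np.hstack([m, np.ones((len(m), 1))])  # homogeneous
        g1 = (T1 @ to_h(m1).T).T[:, :3]
        g2 = (T2 @ to_h(m2).T).T[:, :3]
        return np.vstack([g1, g2])  # global map, before ICP refinement

For example, Case 2 of Section IV would be merged with merge_maps(m1, m2, (0, 0, 0, 0, np.radians(45), 0), (0, -0.5, 0, 0, 0, np.radians(90))), converting the initial-pose angles from degrees to radians.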

A. Hardware

The RGB-D camera selected for this research is an ASUS Xtion Pro Live sensor. An ODROID-XU4 ARM-based octa-core mini computer running Ubuntu XU4 Robotics Edition is used to interface with the RGB-D camera [19]. Our system will consist of a UGV, a Kobuki TurtleBot2, equipped with an RGB-D sensor and a ROS-ready ODROID-XU4. ROS will allow us to control the TurtleBot2 and process all RGB-D and other sensor data (Fig. 2). This setup will allow the UGV to use a vSLAM algorithm for navigation. Next, we will find features for the vSLAM algorithm and pass any detected features to a program designed for localization, so that the system knows where it is in its work environment. Our algorithm will decide whether an old feature is the same as a newly detected one. This is important, since the system will have to generate a map of the environment for future use. The process is repeated as the system navigates toward the specified goal, creating a progressively more complete map.

Fig. 2: System integration and architecture from the robot's point of view, illustrated with ROS messages.

IV. EXPERIMENTAL RESULTS

This section presents experimental results from a vSLAM algorithm that uses a monocular RGB camera for vision input. Our research lab is the work space for Case 1 and is illustrated in Fig. 3.

Fig. 3: Our experimental setup for the quadcopters' positions. Two quadcopters are placed facing opposite directions; the cameras face toward the arrows. Geometric shapes represent tables, workstations, and chairs.

A. Cooperative 3D Mapping

Images captured by the monocular cameras of the two quadcopters are placed into a global map; the features are then used to localize the robots, as illustrated in Fig. 4. The localization technique uses feature matching between the point cloud (PC) data of the current view and the global map, i.e., the latest cooperative mapping result, as shown in Fig. 5.

Fig. 4: Overview of the cooperative mapping and localization algorithm.

Fig. 5: The feature matching technique used for localization.

The results in Fig. 6 show the mapped area, our research lab, as seen by the two cameras of the quadcopters. Map1 is the feedback from the first quadcopter, with its points colored red, and Map2 is from the second quadcopter, with white points. Please refer to Fig. 4 for a comparison of the image frames. In this case, one quadrotor's initial pose coincides with the origin of the global frame, while the initial pose of the second quadrotor is (0, -1, 0, 0, 0, 180). Map1, the point cloud feedback from the first quadrotor, and Map2, the input from the second quadrotor, are filtered and transformed based on their initial poses; since the first camera's initial pose coincides with the origin, no transformation is observed for it.
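The implementation details of the matcher are not the focus here; as a hedged sketch of the match-then-filter step shown in Figs. 5 and 9, using OpenCV's ORB features with Lowe's ratio test to discard bad correspondences (our choice of detector and test, not necessarily the only option):

    import cv2

    def match_features(img1, img2, ratio=0.75):
        """Detect ORB keypoints in two grayscale images and keep only the
        matches passing the ratio test, i.e. filter 'bad correspondences'."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        good = []
        for pair in matcher.knnMatch(des1, des2, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        return kp1, kp2, good

The surviving matches are the "good correspondences" that feed the pose estimation and the Kalman filter discussed in Section IV-B.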

Fig. 7 shows the resulting global map when the two local maps are combined and then merged.

Fig. 6: Map1 (red) and Map2 (white) before (above) and after (below) filtering for Case 1, our research lab.

Fig. 7: Map1 and Map2 merged in Case 1.

In Case 2, for instance, the initial pose of the first quadcopter is (0, 0, 0, 0, 45, 0) and the initial pose of the second quadcopter is (0, -0.5, 0, 0, 0, 90). For this experimental case, we chose a different environment with almost no obstacles, such as desks and chairs, around. As expected, this resulted in more precise mapping, fewer points to filter, and a less noisy map. In Fig. 8, the two maps show the mapping process followed by the transformation; these two maps are then combined and merged.

Fig. 8: Global map finalized after transformation for Case 2.

B. Self Localization

Fig. 9 shows the matching results for the robot before and after filtering out the bad correspondences. The positioning gives the location of the robot, and the 3D pose estimates are the outputs of the Kalman filter when the sensor readings incorporate all the correspondences. After neglecting the bad correspondences, the localization of the robot results in a better approximation.

Fig. 9: Feature matching for localization of the quadcopter before (above) and after (below) filtering. Filtering is applied by removing bad matches.
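As a minimal sketch of the filtering stage behind Fig. 10, a one-dimensional constant-velocity Kalman filter over the x-coordinate measurements (the model, noise values, and time step here are illustrative assumptions, not the exact filter used in our experiments):

    import numpy as np

    def kalman_1d(zs, dt=0.1, q=1e-3, r=0.05):
        """Constant-velocity Kalman filter for one coordinate.
        zs: sequence of position measurements, e.g. x from good matches."""
        F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        H = np.array([[1.0, 0.0]])             # observe position only
        Q = q * np.eye(2)                      # process noise covariance
        R = np.array([[r]])                    # measurement noise covariance
        x, P = np.zeros((2, 1)), np.eye(2)
        estimates = []
        for z in zs:
            x = F @ x                          # predict
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x        # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ y                      # update
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x[0, 0])
        return estimates

Feeding only the measurements derived from the good correspondences, as in Fig. 10, keeps the innovation small and the estimate smooth.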

As we discussed previously for the yaw angle, the good estimates were the outputs of the Kalman filter when the sensor readings incorporated only the good correspondences. Our results can be seen in Fig. 10.

Fig. 10: Kalman filter estimation for the x-coordinate of the quadcopter (above), and the yaw angle estimation after filtering out the bad correspondences (below).

V. CONCLUSION

The work done so far builds the map after gathering the data and performing the data processing operations of smoothing and registration; that is, the map is built offline. Likewise, the localization module estimates the location of the quadcopter within this offline-built global map. Hence, one immediate item of future work is to build a framework that performs cooperative mapping and localization simultaneously. A further step will be to test the cooperative mapping algorithm with an RGB-D sensor such as the Kinect, which may demonstrate the value of the algorithm more clearly, with faster map building and much better accuracy; the localization operation would also become faster and more accurate.

Cooperative SLAM operations involve large amounts of data, due to the huge number of point clouds involved in building the global map from the local maps obtained from the quadcopters. As the mapping area grows, the cooperative operation becomes more sensible, allowing the area to be mapped faster and with better accuracy through data fusion. Better sensors also lead to better accuracy. This implies that a cooperative SLAM operation with better sensors will produce a tremendous amount of point cloud data, which in practice requires huge amounts of computation and high processing power. This calls for the use of cloud computing, which takes care of the high computation and processing power requirements.

REFERENCES

[1] M. Du, J. Wang, J. Li, H. Cao, G. Cui, J. Fang, J. Lv, and X. Chen, "Robot robust object recognition based on fast SURF feature matching," in Chinese Automation Congress (CAC). IEEE, 2013.
[2] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, "3-D mapping with an RGB-D camera," IEEE Transactions on Robotics, vol. 30, no. 1.
[3] M. Paton and J. Kosecka, "Adaptive RGB-D localization," in Ninth Conference on Computer and Robot Vision (CRV). IEEE, 2012.
[4] P. Benavidez, M. Muppidi, P. Rad, J. Prevost, M. Jamshidi, and L. Brown, "Cloud-based realtime robotic visual SLAM," in IEEE International Systems Conference (SysCon), April 2015.
[5] X. Li and N. Aouf, "Experimental research on cooperative vSLAM for UAVs," in Fifth International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN). IEEE, 2013.
[6] F. Endres, J. Hess, N. Engelhard, J. Sturm, D. Cremers, and W. Burgard, "An evaluation of the RGB-D SLAM system," in IEEE International Conference on Robotics and Automation (ICRA), 2012.
[7] P. Newman and K. Ho, "SLAM-loop closing with visually salient features," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), 2005.
[8] G. Bradski et al., "The OpenCV library," Dr. Dobb's Journal, vol. 25, no. 11.
[9] D. Wang and D. Liu, "SIFT-preserving compression of mobile-captured license plate images for recognition," in Sixth International Conference on Wireless Communications and Signal Processing (WCSP). IEEE, 2014.
[10] Y. Qin, H. Xu, and H. Chen, "Image feature points matching via improved ORB," in International Conference on Progress in Informatics and Computing (PIC). IEEE, 2014.
[11] R. Arumugam, V. R. Enti, L. Bingbing, W. Xiaojun, K. Baskaran, F. F. Kong, K. D. Meng, G. W. Kit et al., "DAVinCi: A cloud computing framework for service robots," in IEEE International Conference on Robotics and Automation (ICRA), 2010.
[12] R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May.
[13] B. Bellekens, V. Spruyt, R. B. M. Weyn, and R. Berkvens, "A survey of rigid 3D pointcloud registration algorithms," in The Fourth International Conference on Ambient Computing, Applications, Services and Technologies. Citeseer.
[14] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), 2007.
[15] J. Engel, J. Sturm, and D. Cremers, "Camera-based navigation of a low-cost quadrocopter," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
[16] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: an open-source robot operating system," in ICRA Workshop on Open Source Software, vol. 3, no. 3.2, 2009, p. 5.
[17] G. G. Slabaugh, "Computing Euler angles from a rotation matrix." [Online]. Available: sbbh653/publications/euler.pdf
[18] P. Benavidez, M. Muppidi, and M. Jamshidi, "Improving visual SLAM algorithms for use in realtime robotic applications," in World Automation Congress (WAC), Aug 2014.
[19] "ODROID Forum - View topic - Ubuntu Robotics Edition for ODROID-XU3 (ROS+OpenCV+PCL)." [Online]. Available:
