Terminal Phase Vision-Based Target Recognition and 3D Pose Estimation for a Tail-Sitter, Vertical Takeoff and Landing Unmanned Air Vehicle
Allen C. Tsai, Peter W. Gibbens, and R. Hugh Stone
School of Aerospace, Mechanical and Mechatronic Engineering, University of Sydney, N.S.W. 2006, Australia
{allen.tsai, pwg, hstone}@aeromech.usyd.edu.au

Abstract. This paper presents an approach to accurately identify landing targets and obtain 3D pose estimates for vertical takeoff and landing unmanned air vehicles via computer vision methods. The objective is to detect and recognize a pre-known landing target and, from a single image, obtain the 3D attitude of the flight vehicle with respect to that target. Hu's invariant moments theorem is used for target identification, and the parallel lines of the target shape are investigated to obtain the flight vehicle's orientation. The proposed methods are tested on flight images obtained from a camera on board a tail-sitter, vertical takeoff and landing unmanned air vehicle.

Keywords: Tail-sitter vertical takeoff and landing unmanned air vehicle, computer vision, moment invariants, vision-based pose/attitude estimation, target identification/detection, parallel lines, vanishing points, perspective transformation, vision-based autonomous landing.

1 Introduction

Using cameras as sensors to perform autonomous navigation, guidance and control for land or flight vehicles via computer vision is among the latest developments in field robotics. This paper focuses on how a tail-sitter Vertical Takeoff and Landing (V.T.O.L.) Unmanned Air Vehicle (U.A.V.) can use visual cues to aid navigation and guidance, concentrating on 3D attitude estimation of the vehicle during the terminal phase, landing, where accurate estimates of the vehicle's tilt angle and of its position offset from the landing target are required to achieve a safe vertical landing.
Work in this particular field, focusing on state estimation to aid landing of V.T.O.L. U.A.V.s, has been done by a number of research groups around the world. Of particular interest is the U.S.C. A.V.A.T.A.R. project [1]: landing in an unstructured 3D environment using vision information has been reported, but only the heading angle of the helicopter could be obtained from that vision information. Work done by the University of California, Berkeley [2] focused on ego-motion estimation, where at least two or more images are

L.-W. Chang, W.-N. Lie, and R. Chiang (Eds.): PSIVT 2006, LNCS 4319, pp. 672-681, 2006. Springer-Verlag Berlin Heidelberg 2006
required to determine the 3D pose of a V.T.O.L. U.A.V. Yang and Tsai [3] have looked at position and attitude determination of a helicopter undergoing 3D motion using a single image, but did not present a strategy for target identification. Amidi and Miller [4] used a visual odometer, but the visual data could not be used as stand-alone information to provide state estimation in 3D space to achieve landing. This paper will show how, from a single image, target identification is achieved as a V.T.O.L. U.A.V. undergoes 3D motion during hover just before landing, and how the 3D pose can be ascertained from the appearance of the landing target, which is subject to perspective transformation. The 3D attitudes of the vehicle are determined by investigating parallel lines of the landing target. Target identification is achieved through the calculation of Hu's invariant moments of the target.

Paper outline: Section 2 discusses in detail the image processing techniques undertaken, the set-up of the video camera on the T-Wing and the design of the landing pad; Section 3 looks into the mathematical theorem of Hu's invariant moments, applied in order to detect and recognize the landing target; Section 4 presents the mathematical techniques used to carry out 3D pose estimation; Section 5 presents and discusses the results of the proposed strategies tested on flight images taken from a U.A.V.; and lastly, in Section 6, conclusions and directions for future work are drawn.

2 Vision Algorithm

The idea of the vision algorithm is to accurately maintain focus on an object of interest, namely the marking on the landing pad (hereafter referred to as the landing target), by eliminating objects that are not of interest. The elimination of unwanted objects is achieved by a series of transformations from color to a binary image, filtering and image segmentation.
In this section the image acquisition hardware set-up and the set-up of the landing pad are first introduced, followed by the details of the vision algorithm.

2.1 Image Acquisition Hardware Set-Up

The capital block letter T is used as the marking on the landing pad to make up the landing target. The idea behind using a T is that it has only one axis of symmetry, whereas the more conventional helicopter landing pad markings, either a circle or the capital block letter H, have more than one axis of symmetry. The single axis of symmetry allows uniqueness and robustness when estimating the full 3D attitudes and/or positions of the flight vehicle relative to the Ground Coordinate System, which is attached to the landing target. The camera used was a C.C.T.V. camera with a 1/3" Panasonic color CCD imaging sensor. The resolution is 737 horizontal by 575 vertical pixels, with a field of view of 95º by 59º, recording at 25 Hz. The images from the on-board camera were recorded by transmission over a 2.4 GHz wireless four-channel transmitter to a laptop computer connected to a receiver. The camera was calibrated using an online calibration toolbox [5]. The test bed used for the flights was
the T-Wing [6] tail-sitter V.T.O.L. U.A.V., which is currently under research and development at the University of Sydney. The set-up of the camera on the flight vehicle is shown in the following simulation drawing and photos:

Fig. 1. Simulation drawing and photos of the camera set-up on the T-Wing

2.2 Image Processing Algorithm

The low-level image processing for the task of target identification and pose estimation first requires a transformation from the color image to a gray-scale version. This is essentially the elimination of the hue and saturation information of the RGB image while maintaining the luminance information. Once the transformation from color to gray-scale is achieved, image restoration, i.e. the elimination of noise, is performed with a nonlinear spatial filter: a median filter of mask size 5 x 5 applied twice [7] over the image. Median filters are known to have low-pass characteristics that remove white noise while maintaining edge sharpness, which is critical in extracting good edges, as the accuracy of attitude estimation depends on the outcome of the line detection. After the white noise is removed, a transformation from gray-scale to binary image is performed by thresholding at 30% of the maximum intensity value, in an effort to isolate the object of interest: the landing target, which is marked out in white. However, given the sun's presence and other high-reflectance objects lying around the field where the experiments were carried out, it is not always possible to eliminate all other white regions through this transformation. Image segmentation and connected component labeling were therefore carried out on the images to remove the objects of no interest left over from the gray-scale to binary transformation.
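As a concrete illustration, the preprocessing chain described above (repeated 5 x 5 median filtering, thresholding at 30% of maximum intensity, then connected component labeling with area-based rejection) can be sketched in Python with NumPy. This is a minimal sketch under our own naming, not the authors' implementation; the brute-force median filter is for clarity only, and the area bounds follow the values quoted in the text.

```python
import numpy as np

def preprocess(gray, thresh_frac=0.30, median_size=5, passes=2):
    """Median filtering (5 x 5, applied twice) then binary thresholding
    at 30% of the maximum intensity, as described in the text."""
    img = gray.astype(float)
    k = median_size // 2
    for _ in range(passes):
        padded = np.pad(img, k, mode="edge")
        # brute-force median filter; a library routine would be used in practice
        img = np.array([[np.median(padded[i:i + median_size, j:j + median_size])
                         for j in range(gray.shape[1])]
                        for i in range(gray.shape[0])])
    return (img >= thresh_frac * img.max()).astype(np.uint8)

def connected_components(binary):
    """4-connected component labeling via iterative flood fill."""
    labels = np.zeros_like(binary, dtype=int)
    count = 0
    for si, sj in zip(*np.nonzero(binary)):
        if labels[si, sj]:
            continue
        count += 1
        stack = [(si, sj)]
        while stack:
            i, j = stack.pop()
            if (0 <= i < binary.shape[0] and 0 <= j < binary.shape[1]
                    and binary[i, j] and not labels[i, j]):
                labels[i, j] = count
                stack += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return labels, count

def filter_by_area(labels, n, lo=1500, hi=15000):
    """Keep components whose pixel area lies in [lo, hi] (bounds from the text)."""
    return [k for k in range(1, n + 1) if lo <= np.sum(labels == k) <= hi]
```

A bright rectangular blob of roughly 2000 pixels, for example, survives the filtering and falls inside the 1500 to 15,000 pixel acceptance window, while sun glare (too large) and stray markings (too small) are rejected.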
These objects are rejected by determining that their area is either too big (reflectance from the ground due to sunlight) or too small (objects such as other markings). The critical area values for omitting objects are calculated from the likely altitude and attitudes of the flight vehicle once in hover, which are normally between two and five meters and within ±10º. Given that the marking T is of known area and geometry, the objects kept as candidates should have an area within the range 1500 to 15,000 pixels after accounting for perspective transformation. This normally leaves two or more objects still present in the image, which was the case for about 70% of the frames captured. Hu's invariant moments theorem is then applied as a target identification process to pick out the correct landing target. Once the landing target is determined, the next stage is to determine the edges of the block letter T in order to identify the parallel lines that are used later for attitude estimation. The edge detection is carried out via the Canny edge detector [8];
the line detection is carried out using the Hough transform [8]. The following figure shows the stages of image processing:

Fig. 2. Image processing stages (from left to right and down): grayscale transformation, median filtering, binary transformation, rejection of small objects, rejection of large objects, and finally target identification with line detection

3 Target Identification

The landing target identification procedure uses the geometric properties of the target, investigating the moments of inertia of the target shape. Hu's invariant moments [9] are known to be invariant under translation, rotation and scaling in the 2D plane. This feature is very well suited to tasks associated with identifying landing targets. The moments of a 2D discrete function are:

m_{pq} = \sum_i \sum_j i^p j^q I(i, j)    (1)

where (p + q) represents the order of the moment and I(i, j) is the intensity of the image. The indices i, j correspond to the image plane coordinate axes x and y respectively. The central moments are the moments defined about the centre of gravity, given by:

\mu_{pq} = \sum_i \sum_j (i - \bar{x})^p (j - \bar{y})^q I(i, j)    (2)

where the indices have the same meaning as in equation (1), and \bar{x} and \bar{y} are the centroid coordinates of the target shape.
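Equations (1) and (2), together with the normalized central moments and the Hu invariants they lead to, translate directly into code. The following NumPy sketch is ours, not the authors' implementation; it computes the raw, central, normalized central and first four Hu invariant moments of an intensity image, so their invariance under 2D translation and rotation can be checked numerically.

```python
import numpy as np

def raw_moment(I, p, q):
    """m_pq = sum_i sum_j i^p j^q I(i, j), as in Eq. (1)."""
    i, j = np.mgrid[:I.shape[0], :I.shape[1]]
    return np.sum((i ** p) * (j ** q) * I)

def central_moment(I, p, q):
    """mu_pq about the centroid, as in Eq. (2)."""
    m00 = raw_moment(I, 0, 0)
    xbar, ybar = raw_moment(I, 1, 0) / m00, raw_moment(I, 0, 1) / m00
    i, j = np.mgrid[:I.shape[0], :I.shape[1]]
    return np.sum(((i - xbar) ** p) * ((j - ybar) ** q) * I)

def normalized_moment(I, p, q):
    """eta_pq = mu_pq / mu_00^gamma with gamma = (p+q)/2 + 1."""
    gamma = (p + q) / 2.0 + 1.0
    return central_moment(I, p, q) / central_moment(I, 0, 0) ** gamma

def hu_first_four(I):
    """First four of Hu's invariant moments."""
    eta = lambda p, q: normalized_moment(I, p, q)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return np.array([phi1, phi2, phi3, phi4])
```

Applied to a synthetic binary T shape, the four values are unchanged (to floating point precision) when the shape is translated or rotated by 90º, which is the 2D invariance the paper relies on; under full perspective distortion they only vary slightly, hence the minimum-error matching described below.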
The normalized central moment is defined as:

\eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma}    (3)

where \gamma = (p + q)/2 + 1, for p + q = 2, 3, ... The first four orders of invariant moments can be determined from the normalized central moments; they are:

\phi_1 = \eta_{20} + \eta_{02}
\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2
\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2    (4)
\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2

In this paper all four orders of invariant moments were tracked to carry out identification of the landing target, allowing for perspective distortion. An object is considered to be the target if its sum of errors over all four orders is the minimum among all objects. As the tail-sitter V.T.O.L. U.A.V. approaches the landing phase, the flight vehicle always undergoes 3D motion, while the invariant moment method described above is only known to be invariant under 2D scaling, translation and rotation. Therefore it is critical to investigate all facets of the invariant moments. As shown by Sivaramakrishna and Shashidhar [10], it is possible to identify objects of interest among fairly similar shapes, even when they undergo perspective transformation, by tracking the higher order moments as well as the lower order ones. This method is applied later to determine the correct target, i.e. target identification.

4 State Estimation

Due to the inherent instability of a V.T.O.L. U.A.V. near the ground during landing, it is necessary to accurately determine the 3D positions and 3D attitudes of the flight vehicle relative to the landing target in order to carry out a high performance landing. The information given by the parallel lines of the target is one of the vision-based navigation techniques used to obtain high integrity estimates of the 3D attitudes of the flight vehicle. Because of the landing pad marking T, there exist two sets of nominally orthogonal parallel lines; the two sets, each containing only two parallel lines, are named A and B.
The parallel lines in set A and set B correspond to the horizontal and vertical arms of the T respectively.

4.1 Coordinate Axes Transformation

Before establishing the relationship of parallel lines to determine the vehicle attitude, the transformations between several coordinate systems need to be ascertained. The first coordinate system to be defined is the image plane, denoted the I.C.S. (Image Coordinate System); then the camera coordinate system (C.C.S.), which
represents the camera mounted on the flight vehicle. Lastly, the flight vehicle body axes are denoted as the vehicle coordinate system (V.C.S.). The line directions of the three flight vehicle body axes in the C.C.S. are pre-known and denoted d_normal, d_longitudinal and d_lateral. To describe the relative orientation between the landing target and the flight platform, the global coordinate system (G.C.S.) is also required. The axes of the G.C.S. are denoted X, Y and Z. The X axis is parallel with the horizontal bar of the block letter T and points to the T's right; the Y axis is parallel with the vertical bar of the T, with positive direction down. These two axes are situated at the point where the vertical and horizontal bars meet, rather than at the conventional centre of gravity of the shape. The Z axis therefore points up according to the right-hand rule. The transformation from the C.C.S. to the V.C.S. amounts to an axis re-definition according to:

\tilde{X}_{V.C.S.} = \tilde{X}_{C.C.S.}    (5)

4.2 Flight Vehicle Orientation Estimation via Parallel Lines Information

Moving on to the application of the parallel lines of the target shape to deduce the 3D attitudes of the flight vehicle: it is well known that a set of 3D parallel lines intersects at a vanishing point on an image plane due to perspective transformation. The vanishing point is a property that indicates the 3D line direction of the set of parallel lines [11]. Consider a 3D line L represented by a set of points:

L = \{(x, y, z) \mid (x, y, z) = (p_1, p_2, p_3) + \lambda(d_1, d_2, d_3) \text{ for real } \lambda\}    (6)

The line L passes through the point (p_1, p_2, p_3) and has line direction (d_1, d_2, d_3). The image plane point of a 3D point on the line L can be written as:

(u, v) = (f x/z, f y/z) = [f(p_1 + \lambda d_1)/(p_3 + \lambda d_3), f(p_2 + \lambda d_2)/(p_3 + \lambda d_3)]    (7)

where f is the focal length of the camera and \lambda is the line parameter.
A vanishing point (u_\infty, v_\infty) is obtained as \lambda \to \infty, provided d_3 \neq 0; that is:

(u_\infty, v_\infty) = [f d_1/d_3, f d_2/d_3]    (8)

The direction of the line L can now be uniquely determined from the above equation as:

(d_1, d_2, d_3) = (u_\infty, v_\infty, f) / \sqrt{u_\infty^2 + v_\infty^2 + f^2}    (9)

With the above theory applied to the problem of pose estimation, the line directions in the C.C.S. of the X axis, which corresponds to the horizontal bar of the T, and of the Y axis, corresponding to the vertical bar of the T, can first be determined. These line directions are denoted d_x, d_y and d_z, where d_z is determined by the cross product of d_x and d_y.
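Equations (8) and (9) suggest a simple recovery procedure: express each detected image line of a parallel pair (e.g. from the Hough transform output) in homogeneous form, intersect the two lines to obtain the vanishing point, and normalize to get the 3D direction. A NumPy sketch of this, under our own function names (not the authors' code); the angle helper at the end is the dot-product form used for the attitude angles:

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line through two image points (u, v): l = p1 x p2."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two image lines of a parallel pair, i.e. the
    vanishing point (u_inf, v_inf) of Eq. (8), as a Euclidean point."""
    vp = np.cross(line_a, line_b)
    return vp[:2] / vp[2]

def line_direction(vp, f):
    """3D direction of the parallel set from its vanishing point, Eq. (9)."""
    d = np.array([vp[0], vp[1], f])
    return d / np.linalg.norm(d)

def angle_between(d1, d2):
    """Angle (degrees) from the direction cosines of two vectors."""
    c = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

For instance, projecting two parallel 3D lines of direction (1, 2, 4) through a pinhole with f = 500 and intersecting their images recovers the vanishing point (125, 250) and, through Eq. (9), the original direction.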
Once the directions of the G.C.S. axes in the C.C.S. are determined, the flight platform's attitudes with respect to the landing target can be determined via the following, which takes into account the transformation from C.C.S. to V.C.S.:

\cos\alpha = (d_x \cdot d_{Long.}) / (|d_x| |d_{Long.}|)
\cos\beta = (d_y \cdot d_{Lat.}) / (|d_y| |d_{Lat.}|)    (10)
\cos\gamma = (d_z \cdot d_{Normal}) / (|d_z| |d_{Normal}|)

d_x, d_y and d_z are the direction cosines of the G.C.S. axes in the C.C.S.; \alpha, \beta and \gamma are the angles of the G.C.S. axes in the V.C.S., which correspond to the roll, pitch and yaw angles of the flight vehicle respectively. The following figure shows the relations between the C.C.S., G.C.S. and I.C.S., and the line direction of the vanishing point in the C.C.S.

Fig. 3. Schematic diagram of the relations between I.C.S., C.C.S. and G.C.S., and the line direction of the G.C.S. Y axis in the C.C.S.

5 Experimental Results

Flight images taken during flight testing of the T-Wing, a tail-sitter V.T.O.L. U.A.V., were used to test the accuracy, repeatability and computational efficiency of the aforementioned theories of target recognition and 3D attitude estimation. The flight images were extracted from a 160-second period of the flight vehicle hovering over the target, with the target always kept in sight of the camera viewing angles. Figure 4 shows the three angles between the V.C.S. and the G.C.S. as the flight platform pitches, yaws and rolls during the hover.
Fig. 4. Plots of the alpha, beta and gamma angles ascertained from vanishing points and the respective roll, pitch and yaw angles of the vehicle

The estimated attitudes were compared with filtered estimates from a NovAtel RTK G.P.S. unit with accuracy down to 2 cm and a Honeywell ring-laser gyro of 0.5º accuracy. The attitude estimates from the parallel line information compared favorably, with R.M.S. errors of 4.8º in alpha, 4.2º in beta and 4.6º in gamma. This range of error is deemed good by the standards of other work [12]. Figure 5 shows the computed 1st, 2nd, 3rd and 4th order invariant moments of the landing target, T, from the images captured during the vehicle hover. Knowing that the invariant moments are only invariant under 2D translation, scaling and rotation, all four orders of the invariant moments were tracked. The results show that the errors of the invariant moments computed as the flight vehicle hovers above the target are larger than the errors associated with 2D motion; but by tracking the first four orders of invariant moments, the target always had the smallest total normalized error (the sum of all four percentage discrepancies from the true values, obtained from noiseless images, divided by four) of the objects remaining in the images. The first four orders of invariant moments show that the normalized errors were greater in the periods where the vehicle undergoes greater pitching and yawing motion than in the periods where the vehicle is almost in a perfect vertical hover. There were images where noise was a significant issue, but nevertheless, by tracking all four orders of invariant moments, the landing target could still be distinguished.
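The total normalized error used above to discriminate the target can be sketched as follows. The naming is ours; phi_true stands for the reference invariants obtained from noiseless images of the T, and the candidates are the objects surviving the area filtering.

```python
import numpy as np

def total_normalized_error(phi, phi_true):
    """Sum of the four percentage discrepancies from the true (noise-free)
    invariant values, divided by four, as described in the text."""
    phi, phi_true = np.asarray(phi, float), np.asarray(phi_true, float)
    return np.mean(np.abs(phi - phi_true) / np.abs(phi_true)) * 100.0

def pick_target(candidate_phis, phi_true):
    """Index of the candidate object with the smallest total normalized error."""
    errors = [total_normalized_error(phi, phi_true) for phi in candidate_phis]
    return int(np.argmin(errors))
```

A candidate whose first invariant deviates by 10% while the other three match exactly, for example, scores a total normalized error of 2.5%, and the object with the minimum score is declared the landing target.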
With regard to computational time, the filtering and thresholding of the images took approximately 11.9% of the time; component labeling and segmentation took about 54.99%; the attitude estimation algorithm needed 24.933%; and the invariant moments calculations required 8.16% of the computational time when dealing with two objects.
Fig. 5. The computed 1st, 2nd, 3rd and 4th order invariant moments (in red, dashed) compared with the true values (in blue, solid), and the total normalized error

6 Conclusion

In this paper, an algorithm is presented to accurately identify landing targets and, from those landing targets, using computer vision techniques, to obtain attitude estimates of a tail-sitter V.T.O.L. U.A.V. undergoing 3D motion during the hover phase. This method of 3D attitude estimation requires only a single image, making it computationally more efficient than motion analysis, which requires the processing of two or more images. The paper also presented techniques for accurately determining landing targets via the invariant moments theorem while an air vehicle is undergoing 3D motion. The results show good accuracy in target detection and pose estimation compared with previous work in this field. Further development of these techniques could eventually enable autonomous landing of manned helicopters onto helipads, and commercial aircraft autonomously detecting runways during landing. A major issue requiring investigation is the estimation of attitude when the landing target is only partially visible in the image plane. We intend in the future to integrate the attitude, position and velocity estimates, acting as guidance information, with the control of the T-Wing, especially during landing.
Acknowledgements. J. Roberts for his technical expertise with setting up the camera system and the wireless transmission for recordings.

References

[1] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme, "Vision-based autonomous landing of an unmanned aerial vehicle," IEEE International Conference on Robotics and Automation (ICRA'02), Washington, DC, 2002.
[2] O. Shakernia, R. Vidal, C. S. Sharp, Y. Ma, and S. Sastry, "Multiple view motion estimation and control for landing an unmanned aerial vehicle," IEEE International Conference on Robotics and Automation (ICRA'02), Washington, DC, 2002.
[3] Z. F. Yang and W. H. Tsai, "Using parallel line information for vision-based landmark location estimation and an application to automatic helicopter landing," Robotics and Computer-Integrated Manufacturing, vol. 14, pp. 297-306, 1998.
[4] O. Amidi, T. Kanade, and J. R. Miller, "Vision-based autonomous helicopter research at Carnegie Mellon Robotics Institute," American Helicopter Society International Conference (Heli Japan), 1998.
[5] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," International Conference on Computer Vision (ICCV'99), Corfu, Greece, September 1999.
[6] R. H. Stone, "Configuration Design of a Canard Configuration Tail Sitter Unmanned Air Vehicle Using Multidisciplinary Optimization," PhD thesis, University of Sydney, Australia.
[7] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Prentice Hall, Upper Saddle River, N.J., 2004.
[8] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Pearson Prentice Hall, Upper Saddle River, N.J., 2002.
[9] M. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, 1962.
[10] R. Sivaramakrishna and N. S. Shashidhar, "Hu's moment invariants: how invariant are they under skew and perspective transformations?", IEEE WESCANEX '97: Communications, Power and Computing, Conference Proceedings, 1997.
[11] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, vol. II, Addison-Wesley, 1993.
[12] C. S. Sharp, O. Shakernia, and S. S. Sastry, "A vision system for landing an unmanned aerial vehicle," IEEE International Conference on Robotics and Automation (ICRA'01), 2001.
HIGH ACCURACY 3-D MEASUREMENT USING MULTIPLE CAMERA VIEWS T.A. Clarke, T.J. Ellis, & S. Robson. High accuracy measurement of industrially produced objects is becoming increasingly important. The techniques
More informationNoise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions
Others -- Noise Removal Techniques -- Edge Detection Techniques -- Geometric Operations -- Color Image Processing -- Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images
More informationA 3-D Scanner Capturing Range and Color for the Robotics Applications
J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,
More informationLocating 1-D Bar Codes in DCT-Domain
Edith Cowan University Research Online ECU Publications Pre. 2011 2006 Locating 1-D Bar Codes in DCT-Domain Alexander Tropf Edith Cowan University Douglas Chai Edith Cowan University 10.1109/ICASSP.2006.1660449
More informationCIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS
CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationUnderstanding Tracking and StroMotion of Soccer Ball
Understanding Tracking and StroMotion of Soccer Ball Nhat H. Nguyen Master Student 205 Witherspoon Hall Charlotte, NC 28223 704 656 2021 rich.uncc@gmail.com ABSTRACT Soccer requires rapid ball movements.
More informationSelf-calibration of a pair of stereo cameras in general position
Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible
More informationProduct information. Hi-Tech Electronics Pte Ltd
Product information Introduction TEMA Motion is the world leading software for advanced motion analysis. Starting with digital image sequences the operator uses TEMA Motion to track objects in images,
More informationEstimation of Altitude and Vertical Velocity for Multirotor Aerial Vehicle using Kalman Filter
Estimation of Altitude and Vertical Velocity for Multirotor Aerial Vehicle using Kalman Filter Przemys law G asior, Stanis law Gardecki, Jaros law Gośliński and Wojciech Giernacki Poznan University of
More informationUnmanned Aerial Vehicles
Unmanned Aerial Vehicles Embedded Control Edited by Rogelio Lozano WILEY Table of Contents Chapter 1. Aerodynamic Configurations and Dynamic Models 1 Pedro CASTILLO and Alejandro DZUL 1.1. Aerodynamic
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationMobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS
Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2
More informationCoplanar circles, quasi-affine invariance and calibration
Image and Vision Computing 24 (2006) 319 326 www.elsevier.com/locate/imavis Coplanar circles, quasi-affine invariance and calibration Yihong Wu *, Xinju Li, Fuchao Wu, Zhanyi Hu National Laboratory of
More informationFace Recognition At-a-Distance Based on Sparse-Stereo Reconstruction
Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,
More informationComparison between Various Edge Detection Methods on Satellite Image
Comparison between Various Edge Detection Methods on Satellite Image H.S. Bhadauria 1, Annapurna Singh 2, Anuj Kumar 3 Govind Ballabh Pant Engineering College ( Pauri garhwal),computer Science and Engineering
More informationHOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder
HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder Masashi Awai, Takahito Shimizu and Toru Kaneko Department of Mechanical
More informationMultiple View Motion Estimation and Control for Landing an Unmanned Aerial Vehicle
Multiple View Motion Estimation and Control for Landing an Unmanned Aerial Vehicle Omid Shakernia, René Vidal, Courtney S Sharp, Yi Ma, Shankar Sastry Department of EECS, UC Berkeley Department of ECE,
More informationCAMERA GIMBAL PERFORMANCE IMPROVEMENT WITH SPINNING-MASS MECHANICAL GYROSCOPES
8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING 19-21 April 2012, Tallinn, Estonia CAMERA GIMBAL PERFORMANCE IMPROVEMENT WITH SPINNING-MASS MECHANICAL GYROSCOPES Tiimus, K. & Tamre, M.
More informationCOMPUTER VISION. Dr. Sukhendu Das Deptt. of Computer Science and Engg., IIT Madras, Chennai
COMPUTER VISION Dr. Sukhendu Das Deptt. of Computer Science and Engg., IIT Madras, Chennai 600036. Email: sdas@iitm.ac.in URL: //www.cs.iitm.ernet.in/~sdas 1 INTRODUCTION 2 Human Vision System (HVS) Vs.
More informationAUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S
AUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S Radha Krishna Rambola, Associate Professor, NMIMS University, India Akash Agrawal, Student at NMIMS University, India ABSTRACT Due to the
More informationOn Road Vehicle Detection using Shadows
On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu
More informationRobust and Accurate Detection of Object Orientation and ID without Color Segmentation
0 Robust and Accurate Detection of Object Orientation and ID without Color Segmentation Hironobu Fujiyoshi, Tomoyuki Nagahashi and Shoichi Shimizu Chubu University Japan Open Access Database www.i-techonline.com
More informationRectangular Coordinates in Space
Rectangular Coordinates in Space Philippe B. Laval KSU Today Philippe B. Laval (KSU) Rectangular Coordinates in Space Today 1 / 11 Introduction We quickly review one and two-dimensional spaces and then
More informationVision Review: Image Formation. Course web page:
Vision Review: Image Formation Course web page: www.cis.udel.edu/~cer/arv September 10, 2002 Announcements Lecture on Thursday will be about Matlab; next Tuesday will be Image Processing The dates some
More informationMotion estimation of unmanned marine vehicles Massimo Caccia
Motion estimation of unmanned marine vehicles Massimo Caccia Consiglio Nazionale delle Ricerche Istituto di Studi sui Sistemi Intelligenti per l Automazione Via Amendola 122 D/O, 70126, Bari, Italy massimo.caccia@ge.issia.cnr.it
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationAn Angle Estimation to Landmarks for Autonomous Satellite Navigation
5th International Conference on Environment, Materials, Chemistry and Power Electronics (EMCPE 2016) An Angle Estimation to Landmarks for Autonomous Satellite Navigation Qing XUE a, Hongwen YANG, Jian
More informationRobust Color Choice for Small-size League RoboCup Competition
Robust Color Choice for Small-size League RoboCup Competition Qiang Zhou Limin Ma David Chelberg David Parrott School of Electrical Engineering and Computer Science, Ohio University Athens, OH 45701, U.S.A.
More informationThe Institute of Telecommunications and Computer Sciences, UTP University of Science and Technology, Bydgoszcz , Poland
Computer Technology and Application 6 (2015) 64-69 doi: 10.17265/1934-7332/2015.02.002 D DAVID PUBLISHIN An Image Analysis of Breast Thermograms Ryszard S. Choras The Institute of Telecommunications and
More informationProject report Augmented reality with ARToolKit
Project report Augmented reality with ARToolKit FMA175 Image Analysis, Project Mathematical Sciences, Lund Institute of Technology Supervisor: Petter Strandmark Fredrik Larsson (dt07fl2@student.lth.se)
More informationMORPHOLOGICAL EDGE DETECTION AND CORNER DETECTION ALGORITHM USING CHAIN-ENCODING
MORPHOLOGICAL EDGE DETECTION AND CORNER DETECTION ALGORITHM USING CHAIN-ENCODING Neeta Nain, Vijay Laxmi, Ankur Kumar Jain & Rakesh Agarwal Department of Computer Engineering Malaviya National Institute
More informationCourse 23: Multiple-View Geometry For Image-Based Modeling
Course 23: Multiple-View Geometry For Image-Based Modeling Jana Kosecka (CS, GMU) Yi Ma (ECE, UIUC) Stefano Soatto (CS, UCLA) Rene Vidal (Berkeley, John Hopkins) PRIMARY REFERENCE 1 Multiple-View Geometry
More informationFAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES
FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing
More informationMETRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS
METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires
More informationVision Based Tracking for Unmanned Aerial Vehicle
Advances in Aerospace Science and Applications. ISSN 2277-3223 Volume 4, Number 1 (2014), pp. 59-64 Research India Publications http://www.ripublication.com/aasa.htm Vision Based Tracking for Unmanned
More informationChapters 1 9: Overview
Chapters 1 9: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 9: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapters
More informationCamera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006
Camera Registration in a 3D City Model Min Ding CS294-6 Final Presentation Dec 13, 2006 Goal: Reconstruct 3D city model usable for virtual walk- and fly-throughs Virtual reality Urban planning Simulation
More informationComputer and Machine Vision
Computer and Machine Vision Lecture Week 4 Part-2 February 5, 2014 Sam Siewert Outline of Week 4 Practical Methods for Dealing with Camera Streams, Frame by Frame and De-coding/Re-encoding for Analysis
More informationDevelopment of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras
Proceedings of the 5th IIAE International Conference on Industrial Application Engineering 2017 Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Hui-Yuan Chan *, Ting-Hao
More informationDETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS
DETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS Tsunetake Kanatani,, Hideyuki Kume, Takafumi Taketomi, Tomokazu Sato and Naokazu Yokoya Hyogo Prefectural
More informationConstruction and Calibration of a Low-Cost 3D Laser Scanner with 360º Field of View for Mobile Robots
Construction and Calibration of a Low-Cost 3D Laser Scanner with 360º Field of View for Mobile Robots Jorge L. Martínez, Jesús Morales, Antonio, J. Reina, Anthony Mandow, Alejandro Pequeño-Boter*, and
More informationMETR4202: ROBOTICS & AUTOMATION
Sort Pattern A This exam paper must not be removed from the venue School of Information Technology and Electrical Engineering Mid-Term Quiz METR4202: ROBOTICS & AUTOMATION September 20, 2017 First Name:
More informationStudy on the Signboard Region Detection in Natural Image
, pp.179-184 http://dx.doi.org/10.14257/astl.2016.140.34 Study on the Signboard Region Detection in Natural Image Daeyeong Lim 1, Youngbaik Kim 2, Incheol Park 1, Jihoon seung 1, Kilto Chong 1,* 1 1567
More informationRobust color segmentation algorithms in illumination variation conditions
286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,
More informationTEPZZ 85 9Z_A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION
(19) TEPZZ 8 9Z_A_T (11) EP 2 83 901 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 01.04.1 Bulletin 1/14 (21) Application number: 141861.1 (1) Int Cl.: G01P 21/00 (06.01) G01C 2/00 (06.01)
More informationTEST RESULTS OF A GPS/INERTIAL NAVIGATION SYSTEM USING A LOW COST MEMS IMU
TEST RESULTS OF A GPS/INERTIAL NAVIGATION SYSTEM USING A LOW COST MEMS IMU Alison K. Brown, Ph.D.* NAVSYS Corporation, 1496 Woodcarver Road, Colorado Springs, CO 891 USA, e-mail: abrown@navsys.com Abstract
More informationObject Shape Recognition in Image for Machine Vision Application
Object Shape Recognition in Image for Machine Vision Application Mohd Firdaus Zakaria, Hoo Seng Choon, and Shahrel Azmin Suandi Abstract Vision is the most advanced of our senses, so it is not surprising
More informationCalibration of Inertial Measurement Units Using Pendulum Motion
Technical Paper Int l J. of Aeronautical & Space Sci. 11(3), 234 239 (2010) DOI:10.5139/IJASS.2010.11.3.234 Calibration of Inertial Measurement Units Using Pendulum Motion Keeyoung Choi* and Se-ah Jang**
More informationSensory Augmentation for Increased Awareness of Driving Environment
Sensory Augmentation for Increased Awareness of Driving Environment Pranay Agrawal John M. Dolan Dec. 12, 2014 Technologies for Safe and Efficient Transportation (T-SET) UTC The Robotics Institute Carnegie
More informationVisualisation Pipeline : The Virtual Camera
Visualisation Pipeline : The Virtual Camera The Graphics Pipeline 3D Pipeline The Virtual Camera The Camera is defined by using a parallelepiped as a view volume with two of the walls used as the near
More informationConstruction, Modeling and Automatic Control of a UAV Helicopter
Construction, Modeling and Automatic Control of a UAV Helicopter BEN M. CHENHEN EN M. C Department of Electrical and Computer Engineering National University of Singapore 1 Outline of This Presentation
More informationAUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION
AUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION Kevin Worrall (1), Douglas Thomson (1), Euan McGookin (1), Thaleia Flessa (1) (1)University of Glasgow, Glasgow, G12 8QQ, UK, Email: kevin.worrall@glasgow.ac.uk
More informationAn ICA based Approach for Complex Color Scene Text Binarization
An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in
More informationECE 470: Homework 5. Due Tuesday, October 27 in Seth Hutchinson. Luke A. Wendt
ECE 47: Homework 5 Due Tuesday, October 7 in class @:3pm Seth Hutchinson Luke A Wendt ECE 47 : Homework 5 Consider a camera with focal length λ = Suppose the optical axis of the camera is aligned with
More informationCOMPUTER AND ROBOT VISION
VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington A^ ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California
More informationCamera Drones Lecture 2 Control and Sensors
Camera Drones Lecture 2 Control and Sensors Ass.Prof. Friedrich Fraundorfer WS 2017 1 Outline Quadrotor control principles Sensors 2 Quadrotor control - Hovering Hovering means quadrotor needs to hold
More informationReconstructing Images of Bar Codes for Construction Site Object Recognition 1
Reconstructing Images of Bar Codes for Construction Site Object Recognition 1 by David E. Gilsinn 2, Geraldine S. Cheok 3, Dianne P. O Leary 4 ABSTRACT: This paper discusses a general approach to reconstructing
More informationDept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan
An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi
More informationCamera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences
Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Jean-François Lalonde, Srinivasa G. Narasimhan and Alexei A. Efros {jlalonde,srinivas,efros}@cs.cmu.edu CMU-RI-TR-8-32 July
More informationDocument Image Restoration Using Binary Morphological Filters. Jisheng Liang, Robert M. Haralick. Seattle, Washington Ihsin T.
Document Image Restoration Using Binary Morphological Filters Jisheng Liang, Robert M. Haralick University of Washington, Department of Electrical Engineering Seattle, Washington 98195 Ihsin T. Phillips
More information