UAV Position and Attitude Sensoring in Indoor Environment Using Cameras

Peng Xu

Abstract—There are great advantages to indoor experiments for UAVs. Test flights of a UAV in a laboratory are in general more convenient, more cost effective, and safer than outdoor tests. However, the position sensors widely used on UAVs are GPS based and depend on satellite signals that are only receivable on open ground. Moreover, the accuracy of an off-the-shelf GPS sensor is on the order of meters, which is not sufficient for small-scale UAV tests. The method proposed in this paper adopts cameras as sensors and uses computer vision techniques to process the visual information and extract the UAV's position and attitude. The method is implemented on the Microsoft Windows platform and tested in the Clemson University UAV laboratory. The test results show satisfying accuracy.

Index Terms—UAV, Sensor, Indoor Test, Computer Vision

I. INTRODUCTION

UAV (Unmanned Aerial Vehicle) research usually demands many test flights in addition to simulation. This is due to the incomplete model of the UAV and the difficulty of reproducing real-world disturbances, including ground effect, the lower-level dynamics of motors and propellers, etc. Experiments in a laboratory are sufficient to obtain real-world flight data while the safety issues can be handled easily. Indoor tests also demand lower cost and shorter preparation time than outdoor ones. However, the common navigation sensors that provide position and attitude in UAV projects require GPS signals that are only available in the open field. This is a primary problem that stops researchers from flying indoor tests.

To solve this problem, Guenard et al. [1] proposed a visual servoing UAV system that uses an onboard camera for navigation. Bethke et al. equipped the members of a multi-UAV swarm with onboard cameras to enhance the performance of target tracking and motion estimation [2]. Both of these approaches concentrate on tracking applications. Thus, the UAVs do not sense real-world coordinates; instead, their goal is to minimize the position error with respect to other objects in the world. This is not sufficient if the UAV is performing a task involving absolute coordinates. For example, a UAV might be commanded to work between two sites, with few or no visual cues available to help navigate between them. Moreover, due to the relatively limited computing capability on the UAV, onboard cameras need transmitters to send the video signal back to the control station. The camera and the transmitter both consume the precious payload capacity of the UAV, which could be used elsewhere.

The approach proposed in this paper uses two orthogonally placed cameras to detect color labels on the UAV. A UAV position and attitude estimator takes the positions of the color blobs on the image planes as input and outputs the position and attitude information in homogeneous coordinate form. In this paper the hardware setup and the underlying theory are covered first to make the subsequent discussion straightforward. Then the calibration procedure, which is essential for a functional system, is described. Finally, experimental results show the performance of the system.

Peng Xu is with the Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29631. Phone: +1 864 633 9826, e-mail: pxu@clemson.edu

Fig. 1. Draganflyer X-Pro quadrotor
Fig. 2. Color labels on quadrotor

II. METHOD

The proposed system consists of three primary parts: the hardware, the color blob extractor, and the UAV world coordinate estimator. They are described in this section.

A. Hardware

The sensor system hardware contains two cameras and one computer for processing. The two cameras are off-the-shelf USB webcams; they are responsible for giving full coverage of the UAV workspace and are placed orthogonally at the border of the rectangular workspace. They are mounted at the same height above the ground, with their y axes pointing in the negative z direction of the world coordinate system. These assumptions only simplify the formulas and the subsequent calibration procedure; they do not affect the underlying theory.

The UAV used in this paper is a Draganflyer quadrotor (Fig. 1). Tracking such a complicated object with 6-DOF motion in real time using low-cost webcams is highly challenging, so color labels are attached to the motors of the quadrotor to simplify the front-end processing (Fig. 2). Each motor is labeled with a different color of paper: motor 1 with magenta, motor 2 with cyan, motor 3 with green, and motor 4 with orange. These colors are chosen arbitrarily; any other set of colors that the cameras can easily distinguish from one another would work as well.
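The paper does not give implementation code for the capture loop; the following is a minimal sketch of how the two webcams might be read, assuming OpenCV's VideoCapture API and device indices 0 and 1 (both assumptions, not from the paper).

    import cv2

    # Minimal sketch (not the paper's code): grab frames from the two USB webcams
    # that cover the workspace. Device indices 0 and 1 are assumptions.
    cam_a = cv2.VideoCapture(0)   # first camera at the workspace border (assumed index)
    cam_b = cv2.VideoCapture(1)   # orthogonally placed second camera (assumed index)

    for _ in range(1000):         # process a fixed number of frames for this sketch
        ok_a, frame_a = cam_a.read()
        ok_b, frame_b = cam_b.read()
        if not (ok_a and ok_b):
            break
        # Each frame pair would be handed to the color blob extractor of Section II-B
        # and then to the world coordinate estimator of Section II-C.

    cam_a.release()
    cam_b.release()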

B. Color Blob Extractor

The color blob extractor serves as the front end of the system. It processes the video frames and outputs the positions of the color blobs on the image plane. There are two major concerns in developing the color blob extractor: first, it should be fast enough to meet the real-time performance objective; second, it should give relatively accurate results and be robust to noise and to changes in lighting conditions.

Much research has been done on color blob extraction, especially for robot competitions. Various color models have been investigated to minimize the effect of lighting conditions, and Brusey and Padgham proposed a decision tree method to achieve real-time color blob extraction [3]. We stick to the RGB color model because this is the format the camera chips output, and the other models are all linear or nonlinear combinations of the raw RGB data. A 32x32x32 color lookup table is adopted to ensure processing speed while costing only a small portion of the memory of a modern computer. In this way, color classification is as simple as shifting the values of the three color channels and looking up the table with the result. Due to the flexibility of the lookup table, variations in lighting conditions can be handled by setting up the table appropriately during the training stage.

A two-pass centroid finding algorithm is adopted to filter out noise from the camera and from objects of similar color. We assume that the color label dominates the pixels of the color of interest, so the color blob should lie within one standard deviation of the mean pixel position. In the first pass, the approximate position of the color blob centroid is found, still affected by noise. In the second pass, only pixels that lie within the standard-deviation rectangle around that centroid are considered for the centroid calculation. In this way, noisy pixels far away from the actual centroid are eliminated. Although noisy pixels near the centroid are not guaranteed to disappear, they have little influence on the centroid position.
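The paper does not show how the lookup table is used at run time; a minimal sketch of the 32x32x32 table classification described above might look as follows, assuming 8-bit RGB pixels and a table filled offline from labeled training photos (the class codes and function names are hypothetical).

    import numpy as np

    # 32x32x32 lookup table: each entry holds a color-class code, e.g.
    # 0 = background, 1 = magenta, 2 = cyan, 3 = green, 4 = orange.
    # The table is assumed to be filled during the training stage.
    lut = np.zeros((32, 32, 32), dtype=np.uint8)

    def classify_pixel(r, g, b):
        # Drop the 3 least significant bits of each 8-bit channel (256 -> 32 bins)
        # and use the shifted values as indices into the table.
        return lut[r >> 3, g >> 3, b >> 3]

    def classify_frame(frame_rgb):
        # Vectorized version for a whole HxWx3 frame: returns an HxW label image.
        r = frame_rgb[:, :, 0] >> 3
        g = frame_rgb[:, :, 1] >> 3
        b = frame_rgb[:, :, 2] >> 3
        return lut[r, g, b]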

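The two-pass centroid computation can likewise be sketched as below, assuming a binary mask of the pixels classified as one label color (the thresholding at one standard deviation follows the description above; the function name is hypothetical).

    import numpy as np

    def two_pass_centroid(mask):
        """Two-pass centroid of a binary mask (True where a pixel matched the label color).

        Pass 1: centroid and standard deviation of all matching pixels (noise included).
        Pass 2: centroid of only the pixels inside the standard-deviation rectangle.
        Returns (x, y) in image coordinates, or None if no pixel matched.
        """
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        # Pass 1: rough centroid and spread, still affected by noise.
        cx, cy = xs.mean(), ys.mean()
        sx, sy = xs.std(), ys.std()
        # Pass 2: keep only pixels within one standard deviation of the rough centroid.
        keep = (np.abs(xs - cx) <= sx) & (np.abs(ys - cy) <= sy)
        if not np.any(keep):
            return (cx, cy)
        return (xs[keep].mean(), ys[keep].mean())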
C. World Coordinate Estimator

The world coordinate estimator is based on stereo vision theory and on knowledge of the UAV geometry. First, the world coordinates of the color blobs are calculated. Then the position and attitude of the UAV are computed via geometric equations.

Since we already have assumptions on the camera positions and orientations, it is possible to use a homogeneous transform directly to locate a color blob in the real world. The mapping between real-world coordinates and image pixel coordinates is given in (1), where P is the combined camera parameter matrix containing the internal, projective, and external parameters, (x', y', w) are homogeneous camera coordinates, and (x, y, z) are the real-world coordinates.

\begin{bmatrix} x' \\ y' \\ w \end{bmatrix} = P \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}    (1)

On the image plane, x_i = x'/w and y_i = y'/w, where (x_i, y_i) are the image pixel coordinates. P can be written in terms of its rows p_1, p_2, p_3 as in (2).

P = \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}    (2)

Thus, the image coordinates can be obtained from the two equations below.

x_i = \frac{x'}{w} = \frac{p_1 [x \; y \; z \; 1]^T}{p_3 [x \; y \; z \; 1]^T}    (3)

y_i = \frac{y'}{w} = \frac{p_2 [x \; y \; z \; 1]^T}{p_3 [x \; y \; z \; 1]^T}    (4)

If each row p_i is further split into [p_{i,123} \; p_{i4}], where p_{i,123} contains the first three entries of p_i and p_{i4} is the entry in the last column, the formulas above can be rearranged as follows.

(p_{1,123} - x_i \, p_{3,123}) [x \; y \; z]^T = -p_{14} + x_i \, p_{34}    (5)

(p_{2,123} - y_i \, p_{3,123}) [x \; y \; z]^T = -p_{24} + y_i \, p_{34}    (6)

In this way, the two cameras give four equations in x, y, and z, two from each camera, so the system is overdetermined. Thus x, y, and z can be solved by least squares whenever both cameras detect the same color blob. In some rare cases, however, this is not true, and knowledge of the UAV geometry helps to recover the real-world coordinates. The four motors of the quadrotor form a square in 3D space, so if any two of these points have known coordinates (i.e., they are detected by both cameras), the coordinates of the other two can be solved with information from only one camera. There are two possible scenarios:

1) Two color labels on one edge are detected by both cameras and a third is detected by only one camera. The equation of the plane orthogonal to that edge and passing through the centroid of the motors can be obtained by a simple homogeneous transform. The coordinates of the remaining color blobs must also satisfy one of these plane equations. With three equations, two from the camera and one from the geometry, the real-world coordinates of the color blob are easily solved.

2) Two color labels on a diagonal are detected by both cameras and a third is detected by only one camera. In this case the same kind of geometric constraint applies, except that the constraint plane passes through the center of the diagonal rather than through the end of an edge.

Distortion correction is also important for this application. We use the distortion model described in (7), where (x_d, y_d) are the distorted coordinates, (x_n, y_n) are the undistorted image coordinates, r^2 = x_n^2 + y_n^2, the k_{ci} are the distortion parameters of the camera, and dx is the tangential distortion term [4].

\begin{bmatrix} x_d \\ y_d \end{bmatrix} = \left( 1 + k_{c1} r^2 + k_{c2} r^4 + k_{c3} r^6 \right) \begin{bmatrix} x_n \\ y_n \end{bmatrix} + dx    (7)

In practice the distortion is not very strong and the tangential component is very small. Thus, r_d^2 = x_d^2 + y_d^2 is used in place of r^2 when computing the undistorted image plane coordinates, without great loss of accuracy.

After the positions of the four motors are obtained, the UAV's position is taken to be the center of these four points, and the attitude is easy to calculate by vector operations.
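As a concrete illustration of the least-squares step, the sketch below stacks the two constraints (5)-(6) from each camera that sees a blob and solves the resulting overdetermined system for (x, y, z). It assumes the 3x4 projection matrices are available from the calibration of Section III-A; the function name is hypothetical.

    import numpy as np

    def triangulate_blob(cameras, pixels):
        """Least-squares world coordinates of one color blob.

        cameras: list of 3x4 projection matrices P (one per camera that sees the blob)
        pixels:  list of (x_i, y_i) undistorted pixel coordinates, in the same order
        Returns the 3-vector (x, y, z) minimizing the stacked residuals of (5)-(6).
        """
        A, b = [], []
        for P, (xi, yi) in zip(cameras, pixels):
            p1, p2, p3 = P[0], P[1], P[2]
            # Row from (5): (p1_123 - x_i * p3_123) [x y z]^T = x_i * p34 - p14
            A.append(p1[:3] - xi * p3[:3])
            b.append(xi * p3[3] - p1[3])
            # Row from (6): (p2_123 - y_i * p3_123) [x y z]^T = y_i * p34 - p24
            A.append(p2[:3] - yi * p3[:3])
            b.append(yi * p3[3] - p2[3])
        X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        return X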

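The approximation mentioned above, using r_d^2 in place of r^2, lets the radial factor in (7) be evaluated directly from the distorted coordinates. A minimal sketch of that correction, neglecting the small tangential term dx as the text suggests, could look as follows (an assumed reading of the paper's shortcut, not its exact code).

    def undistort_approx(x_d, y_d, kc1, kc2, kc3):
        """Approximate inverse of the radial distortion model (7).

        Uses r_d^2 = x_d^2 + y_d^2 in place of r^2 and neglects the tangential
        term dx, so the radial factor can be computed directly from the
        distorted normalized coordinates.
        """
        r2 = x_d * x_d + y_d * y_d
        factor = 1.0 + kc1 * r2 + kc2 * r2 ** 2 + kc3 * r2 ** 3
        return x_d / factor, y_d / factor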
III. EXPERIMENT

A. Calibration and Training Procedure

This system needs a fair amount of calibration and training to work properly. The test and calibration procedure is as follows.

- Place the cameras in the planned orthogonal positions.
- Calibrate the internal and projective parameters using the MATLAB Camera Calibration Toolbox [4].
- Fine-tune the position of each camera to make sure its orientation is correct.
- Train the color blob extractor with photos taken by the cameras.

B. Result

Fig. 3. Program snapshot

A snapshot of the running program is shown in Fig. 3. The upper-left textbox shows the current UAV position and attitude as a homogeneous transform; below it are the video signals received from both cameras. Running on a Pentium D 2.8 GHz PC with 1 GB RAM, the program runs at 47.2 Hz in release mode, much higher than the frame rate of the webcams, which is below 30 Hz. The accuracy in position is around +/- 1 centimeter; the accuracy in attitude could not be measured.

IV. CONCLUSION

A method for UAV position and attitude sensing is proposed. It consists of a front-end color blob extractor and a position and attitude estimator, and it is implemented on a PC platform. The proposed algorithm supplies relatively accurate position results with real-time performance. Future work may include a TCP/IP server that cooperates with the current UAV controller running on the QNX platform, as well as better accuracy and a higher update rate through higher-grade cameras and finer tuning of the system.

REFERENCES

[1] N. Guenard, T. Hamel, and R. Mahony, "A Practical Visual Servo Control for an Unmanned Aerial Vehicle," in Proceedings of the IEEE International Conference on Robotics and Automation, ICRA2002.
[2] B. Bethke, M. Valenti, J. P. How, and J. Vian, "Cooperative Vision Based Estimation and Tracking Using Multiple UAVs," Conference on Cooperative Control and Optimization, Gainesville, FL, January 2007.
[3] J. Brusey and L. Padgham, "Techniques for Obtaining Robust, Real-Time, Colour-Based Vision for Robotics," RoboCup-99: Robot Soccer World Cup III, 1999.
[4] Camera Calibration Toolbox for MATLAB, http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html