Technical Paper of HITCSC Team for The International Aerial Robotics Competition

Qingtao Yu, Yuanhong Li, Hang Yu, Haodi Yao, Rui Xing, Qi Zhao, Xiaowei Xing
Harbin Institute of Technology

ABSTRACT

This manuscript introduces the scheme and progress of the HITCSC Team for the 2018 International Aerial Robotics Competition. We describe the physical configuration of the vehicle and its guidance, navigation, and control approach. To improve obstacle avoidance, a combined camera-lidar perception system is used. Motion estimation and a deep-network detection model sense the state of the aerial robot and the environment around it. A hierarchical control structure is applied, and a randomized planning algorithm solves the guidance and interaction problem. Finally, the latest progress is reported.

INTRODUCTION

Problem Statement

Mission 7 of the International Aerial Robotics Competition requires a highly intelligent aircraft to herd autonomous ground robots toward a specific side of a confined indoor square arena [1]. In a GPS-denied indoor arena with obstacles moving around, the aircraft must interact accurately with the robots, by colliding with them or touching their tops, to herd them out of the arena across the green line. The aircraft must make policy decisions, plan paths, and control its motion independently, with no communication with the outside.

Conceptual Solution

The goal of this competition is precise interaction with ground targets in a dynamic environment, which demands strong sensing and actuation. To accomplish this task, precise perception of the state of the aerial robot and of the other mobile robots is essential. Cameras and lidar are the natural choices for sensing the surroundings and recovering the motion of the robots. Another key problem is how to make decisions and plan the flight path. Because of the uncertainty of the whole system, a local-window planning approach is used. To keep the system robust and adaptive to unpredicted factors, a real-time feedback scheme is adopted.

Yearly Milestone

This is the fourth year the HITCSC Team has taken part in the International Aerial Robotics Competition. This year, a new design was proposed and validated. To handle the threat posed by obstacles, a combined camera-lidar perception system is applied. A new physical structure is designed to interact with robots from any direction. The decision-planning-control framework has been revised to improve system efficiency.

SYSTEM OVERVIEW

The hardware structure of the aerial robot is shown in Figure 1. The M100 is chosen as the aerial platform, providing enough payload capacity. A Next Unit of Computing (NUC) onboard computer supplies the computing power needed for image processing and for the planning algorithm. Besides the built-in Inertial Measurement Unit, the platform is equipped with cameras and a lidar, which provide sensing information about the environment.

Figure 1. Physical mechanism of aerial system

Sensors Configuration

Cameras

Cameras provide rich information about the environment, which can be used both to sense environmental changes and to estimate ego-motion. A stereo camera and two monocular cameras are installed on the platform to scan for the mobile robots and to estimate the motion of the aerial robot. Their resolutions are 320x240 and 640x480 respectively, providing rich texture information for subsequent processing. To balance speed against computing power, the frame rate is set to 20 Hz.

RPLiDAR

Compared with cameras, lidar is more reliable, and its field of view is not affected by the flying height of the aerial robot. To compensate for the drawbacks of cameras, the RPLiDAR designed by SLAMTEC is fused with the camera measurements. The RPLiDAR is mounted on top of the quadrotor and is used to detect the obstacle robots. Its angular resolution is 0.9 degrees and its scan frequency ranges from 7.5 Hz to 12.5 Hz. To reject noise, two algorithms are implemented, as sketched below. The first clusters points that satisfy a minimum-distance criterion and discards clusters with too few measurements. The second addresses false detections: only a candidate detected continuously within a local time window is considered a true obstacle.
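As an illustration of these two rejection steps, the following minimal Python sketch clusters one scan and confirms candidates over time. It assumes each scan arrives as numpy arrays of bearings and ranges; the function names, thresholds, and the reading of the minimum-distance criterion as consecutive-range gaps are our assumptions, not the team's actual implementation.

```python
import numpy as np

def cluster_scan(angles_deg, ranges_m, gap_thresh=0.15, min_points=4):
    """Group consecutive scan points whose range changes by less than
    gap_thresh metres, and drop clusters with too few measurements."""
    clusters, current = [], [0]
    for i in range(1, len(ranges_m)):
        if abs(ranges_m[i] - ranges_m[i - 1]) < gap_thresh:
            current.append(i)
        else:
            if len(current) >= min_points:
                clusters.append(current)
            current = [i]
    if len(current) >= min_points:
        clusters.append(current)
    # Represent each surviving cluster by its mean bearing and range.
    return [(float(np.mean(angles_deg[idx])), float(np.mean(ranges_m[idx])))
            for idx in clusters]

class TemporalFilter:
    """Keep a candidate only if it reappears in enough of the last few
    scans within a small bearing/range tolerance (false-alarm rejection)."""
    def __init__(self, window=5, hits_needed=3, ang_tol=5.0, rng_tol=0.3):
        self.window, self.hits_needed = window, hits_needed
        self.ang_tol, self.rng_tol = ang_tol, rng_tol
        self.history = []  # newest scan's candidates last

    def update(self, candidates):
        self.history = (self.history + [candidates])[-self.window:]
        confirmed = []
        for ang, rng in candidates:
            hits = sum(any(abs(ang - a) < self.ang_tol and
                           abs(rng - r) < self.rng_tol for a, r in frame)
                       for frame in self.history)
            if hits >= self.hits_needed:
                confirmed.append((ang, rng))
        return confirmed
```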

Power Management System

A 5700 mAh LiPo battery powers the entire aerial vehicle system. The power management board supervises the battery status and distributes power to the four ESCs; if any failure occurs, it raises an alarm. A voltage regulator additionally supplies the low-power equipment.

Security Measures

Automatic obstacle avoidance and safety provisions are essential for air vehicles. A safety button that cuts power in an emergency is installed to protect people, and a low-mass airframe is designed to avoid injuring people or damaging equipment. Beyond these passive measures, an automatic obstacle avoidance sub-system senses and avoids threats actively.

ALGORITHM DESIGN

Robotics Planning and Control

Path Planning Algorithm

An RRT-based planning algorithm is used to interact with the target while avoiding obstacles. First, a local occupancy map containing the target, the obstacles, and the quadrotor is built. On this local map, a modified RRT generates a collision-free path; unlike the traditional RRT, the algorithm restricts the random samples to the region between the quadrotor and the target (see the sketch below). Second, the path is smoothed by trajectory optimization so that it is suitable for the quadrotor.

Figure 2. Simulation of RRT planning algorithm
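A minimal 2-D sketch of such a restricted-sampling RRT, with circular obstacles standing in for the local occupancy map; the sampling box, step size, and tolerances are illustrative assumptions, not the team's parameters.

```python
import numpy as np

def restricted_sample(start, goal, margin=0.5):
    # Sample only inside the box spanned by quadrotor and target,
    # slightly inflated -- the paper's restriction on random points.
    lo = np.minimum(start, goal) - margin
    hi = np.maximum(start, goal) + margin
    return np.random.uniform(lo, hi)

def _seg_point_dist(p, q, c):
    # Shortest distance from point c to segment p-q.
    v, w = q - p, c - p
    t = np.clip(np.dot(w, v) / (np.dot(v, v) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(c - (p + t * v))

def collision_free(p, q, obstacles, robot_radius=0.3):
    # obstacles: iterable of (x, y, radius) circles on the local map.
    return all(_seg_point_dist(p, q, np.array([ox, oy])) > r + robot_radius
               for ox, oy, r in obstacles)

def rrt_plan(start, goal, obstacles, step=0.5, max_iter=2000, goal_tol=0.4):
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        s = restricted_sample(start, goal)
        near = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - s))
        d = s - nodes[near]
        new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-9)
        if not collision_free(nodes[near], new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if np.linalg.norm(new - goal) < goal_tol:  # target reached
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The returned waypoint list would then pass through the trajectory-optimization step described above before being flown.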

Motion Control Algorithm

The N1 autopilot has excellent attitude-control performance. On top of the N1 attitude loop, we adopt a double closed-loop structure to control the position and velocity of the aircraft; the hierarchical structure of the control system is shown in Figure 3. The controller follows the PID law because of the system uncertainty. By tuning the PID parameters, we can stabilize the system and satisfy our requirements.

Figure 3. Control block diagram of aerial system
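A minimal per-axis sketch of such a cascaded position-velocity loop, assuming the innermost attitude loop is handled by the N1 autopilot; the class names, gains, and limits are illustrative, not the team's tuned values.

```python
class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd, self.out_limit = kp, ki, kd, out_limit
        self.integral, self.prev_err = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, u))  # saturate output

# Outer loop: position error -> velocity setpoint (m/s).
pos_pid = PID(kp=1.2, ki=0.0, kd=0.1, out_limit=2.0)
# Inner loop: velocity error -> tilt command (rad) for the N1 attitude loop.
vel_pid = PID(kp=0.8, ki=0.05, kd=0.02, out_limit=0.35)

def control_step(pos_ref, pos, vel, dt):
    vel_ref = pos_pid.step(pos_ref - pos, dt)
    return vel_pid.step(vel_ref - vel, dt)  # forwarded to the autopilot
```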

Robots Detection

Deep Learning Model

The most commonly used detection networks, such as Fast R-CNN [8], Faster R-CNN [9], and YOLO, mostly take the whole image as input; after proposing many regions of interest (ROIs), they apply bounding-box regression to refine them. Tracking networks such as MDNet [10], FCNT, and SO-DLT make it possible to track a specific target, but their structures are usually very complicated. The UAV's flight height and attitude can change very rapidly, so the scale and angle of the targets keep changing in the UAV's field of view. This easily pulls parts of the image that do not belong to the target into the target ROI, complicating subsequent processing. Besides, the targets may frequently move in and out of the UAV's view, which causes trouble for tracking networks. We therefore consider the fully convolutional network (FCN) more commonly chosen for semantic segmentation. An FCN replaces the fully connected layers with convolution layers and deconvolves the final feature maps, so the output has the same size as the input image. In principle this network classifies the image pixel by pixel, which eases subsequent processing, and it accepts input of any size, so the trained network remains useful if other changes are made to the project. In practical circumstances, however, this structure has the following flaws:

1) It needs a large number of feature maps if the network is to finish the classification directly, which is very stressful for the processor and makes real-time control of the UAV impossible. Furthermore, the network cannot tell the ground robots from the white boundary.

2) Multi-target classification would have to separate the ground robot, the red T-board, the green T-board, and the obstacles' white cylinders. This takes much more processing time, and without enough feature maps the network cannot produce good results.

Given these flaws, we reduce the multi-target classification to a binary classification and build the network structure shown in Figure 4. The network first pools the input image to reduce the size and number of feature maps needed in the following steps. A filter layer after the softmax layer clears interference from the boundary; connected-domain analysis and color classification then distinguish the targets from the obstacles.

Figure 4. The fully convolutional network flowchart
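A minimal PyTorch sketch of a network in this spirit: pool first, a few convolutions, then upsample back to full resolution with a two-channel softmax (target vs. background). The layer counts and channel widths are illustrative assumptions; the filter layer and the connected-domain step live outside the network.

```python
import torch
import torch.nn as nn

class BinaryFCN(nn.Module):
    """Pool the raw image first to shrink the feature maps, classify with
    convolutions only, then deconvolve back to the input resolution."""
    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(4)                     # 4x downsampling up front
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),                        # per-pixel class scores
        )
        self.up = nn.ConvTranspose2d(2, 2, kernel_size=4, stride=4)

    def forward(self, x):
        return torch.softmax(self.up(self.features(self.pool(x))), dim=1)

# A 320x240 camera frame in, a per-pixel target-probability map out:
# probs = BinaryFCN()(torch.rand(1, 3, 240, 320))[:, 1]   # shape (1, 240, 320)
```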

Position and Direction Calculation

Traditional image processing can distinguish the obstacles and the objects. Given accurate regions of interest from the neural network, the algorithm calculates their positions and moving directions in real time.

Figure 5. Follow-up process flowchart

Color classification in HSV color space determines whether a region contains an obstacle, a red target, or a green target. To extract stable object contours, we blur the image with a Gaussian filter, whose two-dimensional kernel is

G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-(x^2 + y^2)/(2\sigma^2)}

The two-dimensional position in the image is obtained by computing the minimum circumscribed rectangle of the contour. Comparing the total pixel values of the areas around the rectangle's vertexes yields the two-dimensional motion direction vector, and after a coordinate transformation we obtain the position and moving direction in the real scene. For an ROI containing an obstacle, we take the center of the region as the obstacle's position so that the UAV can fly safely. When the object in the camera view is intact, we obtain an accurate position and moving direction.
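A sketch of this follow-up processing with OpenCV, under assumed HSV thresholds and window sizes; the vertex-mass heuristic for the direction vector is our reading of the description above, not the team's exact method.

```python
import cv2
import numpy as np

# Illustrative HSV ranges; real thresholds would be tuned on arena footage.
RED_LO, RED_HI = (0, 120, 80), (10, 255, 255)
GREEN_LO, GREEN_HI = (45, 80, 60), (85, 255, 255)

def locate_target(roi_bgr):
    """Classify an ROI by colour, then recover its 2-D position and a
    rough motion direction from the minimum circumscribed rectangle."""
    blurred = cv2.GaussianBlur(roi_bgr, (5, 5), 1.5)     # the Gaussian above
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    for label, lo, hi in (("red", RED_LO, RED_HI), ("green", GREEN_LO, GREEN_HI)):
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        contour = max(contours, key=cv2.contourArea)
        rect = cv2.minAreaRect(contour)                  # ((cx, cy), (w, h), angle)
        (cx, cy) = rect[0]
        corners = cv2.boxPoints(rect)
        # Compare the mask mass around each vertex and point the direction
        # vector from the centre towards the heaviest corner.
        masses = [mask[max(0, int(y) - 8):int(y) + 8,
                       max(0, int(x) - 8):int(x) + 8].sum()
                  for x, y in corners]
        hx, hy = corners[int(np.argmax(masses))]
        return label, (cx, cy), np.array([hx - cx, hy - cy])
    return None  # no red or green target in this ROI
```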

Neural Network Training

Because the fully convolutional network classifies an image pixel by pixel and its structure is relatively simple, it needs only a small amount of training data. The most important issue in training is to balance the proportions of the different kinds of targets and obstacles across different flying heights and angles; if the training data does not cover every situation the UAV may face in the competition, the running network will be unstable in specific circumstances. Our training data includes 400 images with all circumstances balanced. The labeled images resemble Figure 6.

Figure 6. Example of training data

We set the batch size to 100 images, with a fixed learning rate. After 2200 training epochs, the loss stopped decreasing.

Figure 7. Training loss curve

Data Generation

As mentioned above, we use a deep network for segmentation, so the results depend on how well the training data represents the real scene, and recording and labeling new data to improve the network's performance takes considerable time. We therefore use a GAN to quickly regenerate the data we already have so that it represents the new environment. Tests show that when the UAV is moved to a new place, it performs better after retraining on the generated data.

Figure 8. Control block diagram of aerial system

Navigation

To finish mission 7, the air vehicle must be able to localize itself in a GPS-denied area. In this particular environment, with no distinct markers, traditional methods based solely on observing artificial landmarks do not work. Under this circumstance, we propose a navigation algorithm that recovers motion from optical flow and localizes against the arena's grid. The algorithm flow is as follows (the grid extraction step is sketched below):

1) Estimate the camera's approximate location from optical flow.
2) Extract the grid through the Hough transform.
3) Estimate the location of the grid based on the estimated location of the aerial robot.
4) Align the estimated grid with the grid map and solve for the location of the aerial robot.

Figure 9. Grid extraction result
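A sketch of the grid-extraction step (2) with OpenCV's Hough transform, splitting the detected lines into two near-perpendicular families; the modulo-pitch phase only hints at the alignment step (4), and all thresholds here are assumptions.

```python
import cv2
import numpy as np

def extract_grid(gray):
    """Detect arena grid lines and split them into two line families."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    horizontals, verticals = [], []
    if lines is not None:
        for rho, theta in lines[:, 0]:
            if abs(np.sin(theta)) > 0.9:       # near-horizontal image line
                horizontals.append((rho, theta))
            elif abs(np.cos(theta)) > 0.9:     # near-vertical image line
                verticals.append((rho, theta))
    return horizontals, verticals

def grid_phase(lines, pitch_px):
    """Offset of a line family relative to the known grid pitch. Matching
    this phase against the grid map fixes the position modulo one cell;
    the optical-flow estimate from step (1) resolves which cell."""
    rhos = np.array([abs(r) for r, _ in lines])
    return float(np.mean(rhos % pitch_px)) if len(rhos) else None
```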

CONCLUSIONS

This paper introduces the approaches and the aircraft architecture for mission 7 of the IARC. Up to now, the configuration of the aircraft has been finished and validated, and the sensing system has been applied successfully. The planning and control algorithm has proved feasible, but it still lacks the ability to interact with robots when disturbed by obstacles. In the coming period we will focus on the navigation and planning algorithms to improve decision-making and interaction.

REFERENCES

[1] Junwei Yu. Vision-aided Navigation for Autonomous Aircraft Based on Unscented Kalman Filter. Indonesian Journal of Electrical Engineering, ISSN 2302-4046, 02/2013.
[2] Dewei Zhang, et al. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method. Mathematical Problems in Engineering, 2014.
[3] Kangli Wang, Shupeng Lai, et al. An Efficient UAV Navigation Solution for Confined but Partially Known Indoor Environments. IEEE International Conference on Control & Automation (ICCA), 2014.
[4] Courbon, Jonathan, et al. Vision-based navigation of unmanned aerial vehicles. Control Engineering Practice 18.7 (2010): 789-799.
[5] Feng Lin, Ben M. Chen, et al. Vision-based Formation for UAVs. IEEE International Conference on Control & Automation (ICCA), 2014.
[6] Mellinger, Daniel, Nathan Michael, and Vijay Kumar. Trajectory generation and control for precise aggressive maneuvers with quadrotors. The International Journal of Robotics Research, 2012.
[7] Long, J., Shelhamer, E., Darrell, T. Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[8] Girshick, R. Fast R-CNN. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), 2015.
[9] Ren, S., He, K., Girshick, R., et al. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 2015.
[10] Nam, Hyeonseob, and Han, Bohyung. Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. CVPR, 2016.