Robust Horizontal Line Detection and Tracking in Occluded Environment for Infrared Cameras


Sungho Kim 1, Soon Kwon 2, and Byungin Choi 3
1 LED-IT Fusion Technology Research Center and Department of Electronic Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Korea
2 Daegu Gyeongbuk Institute of Science and Technology, Daegu, Korea
3 Electro-Optics Laboratory, Samsung Thales Company, Yongin, Korea

Abstract
Detecting the horizontal line in an infrared image is an important component of automatic surveillance applications such as detecting ships or missiles on the horizon, unmanned aerial vehicle control, flight navigation, and port security. Most existing solutions to this problem use only a single image to detect the horizon line. Although this yields good accuracy for some images, it often fails to detect horizons in foggy, occluded environments. In this paper, we propose a novel horizon detection and tracking method for infrared cameras in 24-hour operation that is robust to sensor vibrations and occlusions. An initial horizon is detected by a sensor geometry-based method for robustness. Local horizon optimization and tracking then produce stable horizons in occluded environments. The experimental results validate the feasibility of the proposed method on real infrared images.

Keywords: IRST, horizontal line, detection, tracking, clutter

1. INTRODUCTION
It is important to detect the horizon for a number of applications, such as sea-based infrared search and track (IRST) [1], vision-guided flight stability and control for micro air vehicles [2], and surveillance for coastal security [3]. Fig. 1 shows an example infrared image of a sea-based environment for horizon detection. One of the previous approaches, using only image processing methods, performed very well in some cases [4]. Liu et al. presented an improved linear fit method, in which an effective preprocessing step is employed and the points fitted by the line are reconfirmed [5].
Nevertheless, the improved linear fit method showed poor applicability against cloud or sea-clutter backgrounds. Having analyzed the weakness of the improved linear fit method, Yang et al. put forward a variance weighted information entropy (VWIE) based algorithm, but this algorithm is not suited to infrared images with complicated backgrounds [6]. Wen et al. proposed an Otsu threshold method that uses morphological opening and closing to smooth the segmented image, but it can only process infrared images with simple backgrounds [7]. Another method combines machine learning with morphology-based operations, the Hough transform, and an expectation maximization function to separate pixel distributions [8]. However, the performance of these methods in identifying a horizon suffers when images are complicated by clutter such as a very cloudy or foggy environment, an uneven horizon line, or varying lighting conditions. In addition, horizon detection can be fragile when there is strong occlusion by islands or targets.

Fig. 1: Example of infrared image and location of horizon/coast line.

In this paper, we present a robust horizon detection and tracking method for infrared sequences by introducing hybrid horizon initialization and outlier identification-based optimal tracking. In Section II, we introduce the overall horizon detection system. The geometry-based horizon initialization method is explained in Section III, and the optimal horizon tracking method is presented in Section IV. In Section V, various performance evaluations and experimental results on real infrared sequences are explained. We conclude and discuss this paper in Section VI.

2. Overview of the proposed system
The proposed system consists of three components: sensor line of sight (LOS), horizon prediction, and horizon optimization in infrared video, as shown in Fig. 2. We consider a sea-based IRST system as a test environment. Infrared search and track (IRST) systems are

developed for autonomous searching, detection, acquisition, tracking, and designation of potential incoming targets [9], [10]. Related research was actively conducted in the late 1980s. In these applications, targets including missiles and ships are typically small and appear against the sea background. An infrared camera is mounted on the top of a ship, which provides sensor height (h) and elevation (α) information. Based on this pose information, we can predict the horizon location using geometric analysis. In the optimization block, the occluded horizon is detected by the RANSAC (RANdom SAmple Consensus) algorithm, followed by local optimization. Horizon tracking is conducted on the inlier horizon with a local search. The horizon is re-initialized periodically to adapt to environmental changes.

Fig. 2: Proposed horizon initialization and optimization in infrared sequences.

3. Geometry-based horizon initialization

3.1 Geometrical model of horizon prediction
In sea-based IRST systems, infrared images consist of sky, horizon, and sea regions. The horizontal region has a strong boundary line that divides heterogeneous backgrounds such as the sky and sea regions. The sea surface region contains many sun-glints, and the ship targets are close to the sensor. So, an image segmentation scheme is necessary for a successful detection system. Image-based region segmentation is possible using clustering algorithms. However, this approach is unstable under environmental changes: the horizontal line can be ambiguous when there is strong sea fog. So, we use a more stable approach based on geometric analysis of the sensor pose information.
As the pose information of an IRST sensor is recorded in the image header, we can estimate the horizontal line. This horizontal information is very important, as it provides a region segmentation cue. If we assume that an IR camera has height (h), elevation angle (α, assume 0 for easy analysis), and earth radius (R), we can depict the geometric relations as shown in Fig. 3(a). The projected horizontal line in any image can be found by calculating the angle (θ_H) as in equation (1):

θ_H = cos⁻¹(R / (R + h))    (1)

In fact, a real IRST sensor can change its elevation angle, which changes the location of the horizontal line in the image domain. If the elevation angle of the camera is given as α and the vertical field of view (FOV) of the sensor is given as β, the angle of the sky region (θ_sky) is determined by equation (2). If the elevation angle α is smaller than θ_H − β/2, the sensor observes only the sea region, so the angle of the sky region is 0; the other cases follow similarly:

θ_sky = 0 if α < θ_H − β/2;  β if α > θ_H + β/2;  α − θ_H + β/2 otherwise    (2)

The angle of the sea region is determined as θ_sea = β − θ_sky. As the sky-sea region segmentation ratio is determined by tan θ_sea / tan θ_sky, the final horizontal line (H_prior) is calculated using equation (3):

H_prior = ImageHeight × tan θ_sky / (tan θ_sky + tan θ_sea)    (3)

If we assume the image height is 1,280 pixels, the vertical field of view is 20°, the sensor height is 20 m, and the elevation angle is 5°, then the predicted horizontal line (H_prior) is located at 974 pixels, as shown in Fig. 4.

3.2 Analysis of horizon prediction error
In ideal cases, the horizon prediction using the above method is accurate. However, there are several noise factors, such as uncertainty of the sensor height caused by waves, and uncertainty of the sensor elevation and roll angles after mechanical stabilization. In the first case, we consider sensor height noise of 0–10 m.
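As a concrete illustration, the geometric prediction of equations (1)-(3) can be sketched in Python. This is a minimal sketch under stated assumptions: the function name and the default earth radius are our own choices, not from the paper.

```python
import math

def predict_horizon(h, alpha, beta, image_height, R=6_371_000.0):
    """Predict the horizon row H_prior from sensor height h [m],
    elevation alpha [deg], and vertical FOV beta [deg] (Eqs. 1-3)."""
    theta_h = math.degrees(math.acos(R / (R + h)))  # Eq. (1): horizon dip angle
    if alpha < theta_h - beta / 2:                  # Eq. (2): angle of sky region
        theta_sky = 0.0      # sensor sees only sea
    elif alpha > theta_h + beta / 2:
        theta_sky = beta     # sensor sees only sky
    else:
        theta_sky = alpha - theta_h + beta / 2
    theta_sea = beta - theta_sky
    denom = math.tan(math.radians(theta_sky)) + math.tan(math.radians(theta_sea))
    if denom == 0.0:
        return 0.0
    # Eq. (3): split the image height by the tangent ratio of the two regions
    return image_height * math.tan(math.radians(theta_sky)) / denom
```

For h = 20 m, α = 5°, β = 20°, and a 1,280-pixel image, this sketch places the horizon in the 950-980 pixel range, in line with the prediction of Fig. 4; the exact pixel value depends on the projection model assumed for the tangent ratio.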
A height change of 10 m is possible when there is a strong hurricane. Fig. 5(a) shows the horizon prediction error, or offset in pixels, as a function of the sensor height noise. We can predict a maximum horizon offset of 2 pixels when the sensor height noise is 10 m. In the second case, we consider the noise of the sensor elevation angle. Normal mechanical stabilization can provide an angle error of ±0.005°, which causes a horizontal offset of 0.3 pixels. If we assume the noise is ±0.5°, allowing a 100× stability margin, the horizon offset can be predicted

as shown in Fig. 5(b). The maximal horizontal offset can be ± pixels. According to the results of the noise analysis, the sensor elevation noise is more critical than the sensor height noise. There can be additional sensor noise from roll stabilization. With a normal roll stabilization error of 0.005°, there is almost no horizontal tilt (θ_prior). If we consider a stabilization error of 0.5°, the maximal horizontal tilt is 10 pixels in a 1280×1080 image. Summarizing the above noise analysis, we conclude that the maximal horizon offset boundary is ±30 pixels, including horizontal tilt.

Fig. 3: Geometry of sea-based IRST system. (a) Relationship between sensor height and horizontal line, (b) camera geometry with the field of view and elevation angle (α = 0), (c) approximated position of the horizontal line when the elevation angle is α.
Fig. 4: Synthetic horizon prediction using geometric analysis of sensor pose.
Fig. 5: Noise analysis of horizon prediction error: (a) horizon offset caused by sensor height noise, (b) horizon offset caused by sensor elevation noise.

4. Optimal horizon tracking
From the sensor LOS above, we can predict the horizontal location within a pre-defined search boundary. The next step is optimal horizon tracking in the video sequence, as shown in Fig. 6. Given an input frame, horixels (horizon pixels) are extracted using a column-directional gradient and max

selection. Then, inlier horixels are identified using the robust line fitting method RANSAC [11]. The important role of RANSAC is to find the inlier indices of true horixels. Based on the inlier indices, total least squares optimization can detect the final horizon stably. Once inlier horixels are identified through this process, horizon tracking is conducted using horixel extraction and optimization. The inlier detection block is activated at the beginning and periodically thereafter to adapt to environmental changes.

Fig. 6: Horizon optimization and tracking flow in infrared sequences.
Fig. 7: Example of the predicted horizon and detected horixels.

4.1 Horixel extraction
Given a predicted horizon as shown in Fig. 7 (dotted blue line), a search boundary is set. Then, a sampling interval is defined to reduce the computational complexity. At each sample position, a column-direction gradient filter is applied using a derivative of Gaussian kernel. Horixels close to the predicted horizon are then extracted by max selection. Fig. 7 (dotted black line) shows the extracted horixels.

4.2 Inlier detection using RANSAC
In a sea environment, the horizon is frequently occluded by islands, coasts, and clouds. So, we need a robust horizon estimation method such as RANSAC. Basically, the RANSAC algorithm picks two horixels and hypothesizes a horizon line. Then, it checks the line fit and counts the inliers. After a number of iterations, the horizon line parameters with the largest number of inliers are selected. Fig. 8 shows the inlier detection results using the RANSAC method.
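A minimal sketch of the two steps above, horixel extraction (Section 4.1) and RANSAC inlier selection, assuming NumPy; the search half-width, sampling interval, kernel width, and inlier threshold are illustrative values of our own, as the paper does not specify them.

```python
import random
import numpy as np

def extract_horixels(img, h_prior, search=30, step=8, sigma=2.0):
    """For every `step`-th column, filter the rows inside the search band
    around the predicted horizon with a derivative-of-Gaussian kernel and
    keep the row of maximum absolute gradient response as the horixel."""
    r = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    dog = -r * np.exp(-r ** 2 / (2 * sigma ** 2))  # derivative of Gaussian
    lo = max(h_prior - search, 0)
    hi = min(h_prior + search, img.shape[0])
    horixels = []
    for x in range(0, img.shape[1], step):
        col = img[lo:hi, x].astype(float)
        resp = np.convolve(col, dog, mode="valid")  # avoid border artifacts
        y = lo + len(dog) // 2 + int(np.argmax(np.abs(resp)))
        horixels.append((x, y))
    return horixels

def ransac_inliers(horixels, n_iter=300, thresh=1.5, seed=0):
    """Repeatedly fit a line through two random horixels and keep the
    hypothesis that explains the most points within `thresh` pixels."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(horixels, 2)
        if x1 == x2:
            continue                      # degenerate (vertical) hypothesis
        a = (y2 - y1) / (x2 - x1)         # slope
        b = y1 - a * x1                   # intercept
        idx = [i for i, (x, y) in enumerate(horixels)
               if abs(y - (a * x + b)) <= thresh]
        if len(idx) > len(best):
            best = idx
    return best                           # indices of inlier horixels
```

On a synthetic sky/sea step image, `extract_horixels` recovers points on the brightness boundary, and `ransac_inliers` drops horixels pulled away by occluding structure before the final fit.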
Note that inliers and outliers are classified almost correctly. The inlier indices are then used in the optimization of line fitting and in horizon tracking.

Fig. 8: Example of inlier horixels found by RANSAC.
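The SVD-based total least squares refit described in the next subsection can be sketched as follows. This is a generic orthogonal line fit via NumPy; reading the paper's "normalize" step as simple mean-centering is our assumption.

```python
import numpy as np

def tls_line(points):
    """Total least squares (orthogonal) line fit via SVD [12].
    Mean-center the points; the right singular vector with the smallest
    singular value is the line normal, the other one the line direction."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)                 # centroid lies on the fitted line
    _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
    direction, normal = Vt[0], Vt[-1]
    return c, direction, normal        # line: {c + t * direction}
```

Given the RANSAC inlier indices, the refit is simply `tls_line([horixels[i] for i in inlier_idx])`; because the solution is closed-form, it is fast enough to run every frame during tracking.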

Fig. 9: Example of horizontal line optimization in occluded environments (Set 1: occluded by cloud; Set 2: occluded by a near island; Set 3: occluded by near/remote islands; Set 4: occluded by a near coast).

4.3 SVD-based optimization and tracking
The last step is to refine the horizon parameters using total least squares fitting, given the set of inlier horixels. The fitting process is as follows. First, we normalize the inlier horixels and then conduct a singular value decomposition (SVD) [12]. The horizon direction is selected by the eigenvector with the smallest eigenvalue. Fig. 9 shows the horizon optimization results for an image occluded by a near island and a remote island; the horizontal area is enlarged to show the results. Horizon tracking is done by horixel extraction and SVD-based optimization with the inlier indices. RANSAC-based initialization is activated periodically.

5. Experimental results
We prepared four kinds of test sequences, as shown in Fig. 10, to validate the robustness of the proposed method. Set 1 contains remote sea images occluded by strong cloud. Horizons in Set 2 are occluded by a near island that occupies 1/3 of the horizon length. Set 3 has a near island and a remote island. The last, Set 4, has a near coast in which boats and buildings occlude the horizon. A detected horizon is declared a correct detection if the line fitting error is within 1 pixel on average. The ground truth of the horizon location was prepared by manual inspection. The original test sets have almost no sensor noise, so we add artificial sensor tilt noise of ±0.5° and horizon location noise of ±3.0 pixels, generated uniformly within those ranges.

Fig. 10: Composition of the test database.

Table 1 summarizes the overall experimental results. Our method detected horizons correctly for the noiseless sequence data. For the noisy data, only one frame of Set 4 shows an incorrect horizon detection. Figs. 11, 12, 13, and 14 show sampled horizon detection results for the noise-added sequences. Dotted blue lines denote the horizon prediction by sensor LOS, solid black or white lines denote the optimal horizon, and magenta dots denote inlier horixels extracted by RANSAC. Note that the horizon lines are detected robustly, regardless of occlusion type, under sensor noise.

Table 1: Detection rate (DR) of horizon for the noiseless data and noisy data. Test set: Set 1, Set 2, Set 3, Set 4. DR w/o noise [%]: 100 (20/20), 100 (30/30). DR with noise [%]: 100 (20/20), 97 (29/30).

6. Conclusions
In this paper, we presented a robust horizon detection and tracking method using sensor geometry and optimization. Through the analysis of the sensor geometry, we can predict the search range of the horizon. Inlier indices are found by RANSAC, and these indices are utilized in the SVD-based line fitting and tracking. Experimental results on various infrared sequences validate the robustness of the proposed method.

Acknowledgement
This research was supported by the DGIST R&D Program (12-BD-0202) and by a grant-in-aid from Samsung Thales. It was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (No. 2012-0003252).

Fig. 11: Examples of horizon detection for the noise-added test Set 1.
Fig. 12: Examples of horizon detection for the noise-added test Set 2.

References
[1] S. Kim and J. Lee, "Scale invariant small target detection by optimizing signal-to-clutter ratio in heterogeneous background for infrared search and track," Pattern Recognition, vol. 45, no. 1, pp. 393-406, 2012. [Online]. Available: http://dx.doi.org/10.1016/j.patcog.2011.06.009
[2] S. M. Ettinger, M. C. Nechyba, P. G. Ifju, and M. Waszak, "Vision-guided flight stability and control for micro air vehicles," Advanced Robotics, vol. 17, no. 7, pp. 617-640, 2003.
[3] S. P. van den Broek, H. Bouma, M. A. Degache, and G. Burghouts, "Discrimination of classes of ships for aided recognition in a coastal environment," in Proc. of SPIE, vol. 7335, 2009, p. 73350W.
[4] T. G. McGee, R. Sengupta, and J. K. Hedrick, "Obstacle detection for small autonomous aircraft using sky segmentation," in Proc. ICRA. IEEE, 2005, pp. 4679-4684.
[5] S.-T. Liu, T.-S. Shen, Y.-L. Han, and X.-D. Zhou, "Research on locating the horizontal region of ship target," Infrared and Laser Engineering, vol. 33, no. 1, pp. 51-53, 2003.
[6] L. Yang, Y. Zhou, and L. Chen, "Variance WIE based infrared images processing," Electronics Letters, vol. 42, no. 15, pp. 338-340, 2006.
[7] P.-Z. Wen, Z.-L. Shi, and H.-B. Yu, "Automatic detection method of IR small target in complex sea background," Infrared and Laser Engineering, vol. 32, no. 6, pp. 590-593, 2003.
[8] S. Todorovic and M. Nechyba, "Sky/ground modelling for autonomous MAV flight," in Proc. ICRA. IEEE, 2003, pp. 4679-4684.
[9] S. B. Campana, The Infrared and Electro-Optical Systems Handbook, SPIE Optical Engineering Press, vol. 5, no. 4, 1993.
[10] A. N. de Jong, "IRST and perspective," in Proc. of SPIE, vol. 2552, 1995, pp. 206-213.
[11] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
[12] R. Hanson and M. Norris, "Analysis of measurements based on the singular value decomposition," SIAM J. Sci. Stat. Comput., vol. 2, pp. 363-373, 1981.

Fig. 13: Examples of horizon detection for the noise-added test Set 3.
Fig. 14: Examples of horizon detection for the noise-added test Set 4.