An Improvement of the Occlusion Detection Performance in Sequential Images Using Optical Flow


pp. 247-251, http://dx.doi.org/10.14257/astl.2015.99.58

An Improvement of the Occlusion Detection Performance in Sequential Images Using Optical Flow

Jin Woo Choi 1, Jae Seoung Kim 2, Taeg Kuen Whangbo 3
1 Culture Technology Institute, Gachon University, 1342 Seongnamdaero, Sujeong-gu, Seongnam-si, Gyeonggi-do, 461-701, Korea, cjw49@paran.com
2, 3 Department of Computer Science, Gachon University, 2 rememberguy@hanmail.net, 3 tkwhangbo@gachon.ac.kr

Abstract: During the process of acquiring motion vectors through existing optical flow methods, problems occur due to occlusion. To solve this, the flow values obtained through optical flow are used to determine the occlusion candidate region. In addition, a bilateral filter, a blurring filter that preserves the edge data of an object, is used to repeatedly perform the flow compensation process and thereby detect occlusions more accurately.

Keywords: Occlusion Detection, Optical Flow

1 Introduction

We are designing an algorithm that automatically produces a depth map using motion vectors and vanishing-point information, but it is limited by the low accuracy of motion vectors caused by occlusion regions in the optical flow process that acquires them. The objective of the present study is to detect the occlusion region between the current and next frames more accurately than our previous approach, thereby producing accurate motion vector values in the optical flow process through flow-value compensation in the occlusion region. We examined the presence of occlusion around object boundaries through object segmentation and optical flow, and herein propose a method of increasing the accuracy of occlusion region detection through postprocessing of the occlusion region.
2 Proposed Method

We set the following objectives to solve the problems described above. Occlusion is not tested in all regions of the current and next frames; instead, occlusion candidates are narrowed to the boundary regions of objects, where most occlusions occur, thereby improving processing speed and preventing noise generation.

ISSN: 2287-1233 ASTL Copyright 2015 SERSC

If the occlusion region is determined pixel by pixel, some occlusions are missed and noise is introduced, which interferes with detection. Therefore, the occlusion region is objectified through postprocessing, and undetected occlusion points are filled in later.

Fig.1. Flow chart of the proposed algorithm

2.1 Object segmentation using the watershed algorithm

Because most occlusion regions occur around the boundaries of objects in images rather than inside independent objects, occlusions are not examined in all regions but only in the surrounding areas of object boundaries, thereby improving processing speed and preventing noise generation. To achieve this, an object segmentation process is needed, and the occlusion candidates are limited to the boundary areas of the segmented objects. In this study, we used the rainfall-mode watershed algorithm for object segmentation.

Fig.2. Example of marker information used in the current and next frames

Because our proposed algorithm finds occlusion regions using two images, the current and next frames, the same segmentation must be obtained in both frames. To do this, the object information segmented in the previous frame is reused in the next frame as marker information.
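As a rough sketch of this marker reuse, the boundary of each object segmented in the previous frame can be shrunk toward its center point to form the next frame's marker. This is a minimal NumPy sketch, not the paper's exact implementation: the representation of a boundary as an (N, 2) point array and the scale ratio `k` are our assumptions.

```python
import numpy as np

def boundary_to_marker(boundary, k=0.8):
    """Shrink an object's boundary toward its center point by ratio k
    to obtain marker points for the watershed in the next frame.

    boundary: (N, 2) array of (x, y) boundary points from the previous frame.
    k: scale ratio in (0, 1); smaller k moves the marker further inside.
    """
    boundary = np.asarray(boundary, dtype=float)
    center = boundary.mean(axis=0)           # object center point c
    return center + k * (boundary - center)  # marker points m

# A square boundary scaled by k = 0.5 halves each point's distance
# to the center (2, 2).
square = np.array([[0, 0], [4, 0], [4, 4], [0, 4]])
marker = boundary_to_marker(square, k=0.5)
```

Choosing k below 1 keeps the marker strictly inside the object, so the marker still falls within the object even if it moves slightly between frames.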

A marker uses the boundary information of an object segmented in the previous frame. Note that since objects can move between frames, the boundary information is not used directly as a marker but is scaled by a certain ratio toward the object's center point. Fig.2 shows a schematic diagram of how marker information is used in the current and next frames. Since the boundaries b1, b2, and b3 are extracted per object as a result of the watershed in the previous frame, the marker information m1, m2, and m3 can be acquired by scaling each boundary down by a certain ratio k toward its center point c1, c2, or c3. The acquired markers m1, m2, and m3 are then used as markers in the next frame, and nearly the same boundary information b1′, b2′, and b3′ as in the previous frame is acquired.

2.2 Occlusion candidate region determination

In Section 2.1, the object segmentation process was performed using the watershed algorithm on the current and next frames, and the objects in the video were separated. Since objects that include occlusion regions have no consistent motion orientation, the orientation of each object is examined, and objects with inconsistent orientation are classified as objects that have an occlusion region. If objects that have an occlusion region are adjacent to each other, the boundaries of the adjacent areas are determined to be occlusion candidate regions. We computed the consistency of the motion vectors between the current and next frames for the pixels that compose each boundary by using optical flow. Since objects that have occlusion regions experience pixel-information loss between the current and next frames, the motion vector values acquired through optical flow are not consistent [3][4].
Let the points sampled at a certain interval along an object boundary b be p1, p2, ..., pn, and let the angles of the vectors obtained by executing optical flow at those points be θ1, θ2, ..., θn. Let the variations of the vector angles θ1, θ2, ..., θn be d1, d2, ..., dn. If the mean value of the variations is larger than a threshold, an occlusion region occurs on that boundary. This can be expressed as shown in Equation (1); the threshold T is determined adaptively per video.

  (1/n) Σ_{i=1}^{n} d_i > T  ⇒  b is an occlusion candidate    (1)

2.3 Generation of the occlusion point map

Once the occlusion candidate boundaries are determined, each pixel around the candidate boundaries must be examined to determine whether occlusion is present. The image of the final per-pixel occlusion examination results is called the occlusion point map. To create an occlusion point map, the difference in intensity value per pixel around the occlusion candidate boundaries is compared between the current and next frames, and each pixel's result is written into the occlusion point map according to whether occlusion is present.
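The consistency test of Equation (1) can be sketched as follows. This is a minimal NumPy sketch; wrapping the angle differences into [0, π] and passing the threshold as a fixed argument are our assumptions, whereas the paper determines the threshold adaptively per video.

```python
import numpy as np

def is_occlusion_candidate(flow_vectors, threshold):
    """Apply Equation (1): a boundary is an occlusion candidate when the
    mean variation of its sampled flow-vector angles exceeds the threshold.

    flow_vectors: (n, 2) array of (u, v) optical-flow vectors at the
                  boundary sample points p_1 .. p_n.
    """
    angles = np.arctan2(flow_vectors[:, 1], flow_vectors[:, 0])
    # d_i: variation between consecutive vector angles, wrapped into
    # [0, pi] so that angles near -pi and +pi count as nearly identical.
    d = np.abs(np.diff(angles))
    d = np.minimum(d, 2 * np.pi - d)
    return d.mean() > threshold

# Uniform rightward flow is consistent; flow whose direction flips
# along the boundary signals a possible occlusion.
consistent = np.tile([1.0, 0.0], (10, 1))
flipping = np.array([[1.0, 0.0], [-1.0, 0.0]] * 5)
```

For the uniform flow the mean variation is 0, so the boundary is not flagged; for the flipping flow the variations are all π, so it is flagged as a candidate.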

To compare intensity values per pixel, we used the pyramid Lucas-Kanade optical flow method within 5×5 pixels around each occlusion candidate boundary. The error rate was reduced by referencing information from surrounding pixels, using a 3×3 block for each pixel [1]. The pixel intensity comparison is shown in Equation (5) [5]:

  o(x, y) = occlusion point,         if (I1(x, y) − I2(x + u(x, y), y + v(x, y)))^2 > T
            not an occlusion point,  otherwise                                        (5)

In the above equation, u(x, y) and v(x, y) refer to the x and y components, respectively, of the motion vector calculated through optical flow at pixel position (x, y) in the current frame. If the difference in intensity between the corresponding pixels in the current and next frames is larger than the threshold, the pixel is regarded as an occlusion point [2].

3 Experimental Results and Analysis

In the object segmentation process, the first step of the proposed algorithm, the watershed algorithm was employed and information from the previous frame was used as the marker in the next frame. This produced an accurate result, because the current and next frames are required to have the same segmentation. In addition, occlusion regions were not examined in all areas of an image but were limited to the boundaries of specific objects, thereby reducing computation. To narrow the occlusion candidate region even further, the abutting boundaries of the white objects adjacent to the same occlusion region were selected as the final candidates where occlusion regions were present.

(a) (b) (c)
Fig.3. (a) Resulting image of the determination of whether objects have occlusion regions (b) Final occlusion boundary (c) Occlusion point map

Once the final occlusion boundaries are acquired, pixels within a certain distance of the occlusion boundaries are examined for occlusion, completing an occlusion point map that represents occlusion per pixel. Fig.3 shows an example of this result.
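The per-pixel check of Equation (5), which fills the occlusion point map described in Section 2.3, can be sketched as follows. This is a minimal NumPy sketch under assumed conventions: dense per-pixel flow fields `u` and `v`, nearest-neighbor rounding of the flow-compensated position, and clamping to the image border are our simplifications.

```python
import numpy as np

def occlusion_point_map(I1, I2, u, v, threshold):
    """Mark pixel (x, y) as an occlusion point when the squared intensity
    difference between I1(x, y) and the flow-compensated pixel
    I2(x + u(x, y), y + v(x, y)) in the next frame exceeds the threshold.

    I1, I2: grayscale current / next frames, shape (H, W).
    u, v:   per-pixel x / y motion-vector components, same shape.
    """
    h, w = I1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Flow-compensated coordinates, rounded and clamped to the image.
    xt = np.clip(np.rint(xs + u).astype(int), 0, w - 1)
    yt = np.clip(np.rint(ys + v).astype(int), 0, h - 1)
    diff = (I1.astype(float) - I2[yt, xt].astype(float)) ** 2
    return diff > threshold  # boolean occlusion point map

# With zero flow, only the pixel whose intensity changed between the
# frames is flagged as an occlusion point.
I1 = np.zeros((3, 3)); I2 = np.zeros((3, 3)); I2[1, 1] = 10.0
zero = np.zeros((3, 3))
occ = occlusion_point_map(I1, I2, zero, zero, threshold=50.0)
```

In the paper the check is restricted to pixels near the occlusion candidate boundaries; the sketch evaluates it densely for brevity.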

4 Conclusion

In this paper, the following solutions were proposed to solve the problems in existing studies. By limiting the search for the occlusion region to the specific boundaries where it is likely to be present, rather than searching all regions of an image, the processing time is reduced, facilitating application to a real-time auto-converting process. By reducing the error in which an occlusion region is detected in areas other than the object boundaries where occlusions actually occur, the accuracy of occlusion region detection is increased. Through postprocessing, occlusion regions that cannot be detected in the occlusion point map are detected in the form of a plane, further increasing accuracy.

In future studies, occlusion regions generated at the outer lines of an object, inside an object, and at the boundary areas of an image should also be detected. Furthermore, if occlusions consisting of curves, or of combinations of straight and curved lines, can be objectified through labeling rather than by objectifying the clustered occlusion point map through a Hough transform, the performance of occlusion region detection in various image shapes would increase.

Acknowledgments. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2013R1A1A2057851).

References

1. Lucas, B., Kanade, T.: An Iterative Image Registration Technique with an Application to Stereo Vision. In: Proc. 7th International Joint Conference on Artificial Intelligence, pp. 674--679. Vancouver, Canada (1981)
2. Ince, S., Konrad, J.: Occlusion-Aware Optical Flow Estimation. IEEE Transactions on Image Processing 17, 1443--1451 (2008)
3. Cho, S. H.: An Improvement in Occlusion Detection via Optical Flow Estimation. Master's Thesis, Korea Advanced Institute of Science and Technology (2009)
4. Sun, J., Li, Y., Kang, S. B., Shum, H.-Y.: Symmetric Stereo Matching for Occlusion Handling. In: CVPR 2005, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 399--406. San Diego (2005)
5. Xiao, J., Cheng, H., Sawhney, H., Rao, C., Isnardi, M.: Bilateral Filtering-Based Optical Flow Estimation with Occlusion Detection. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006, 9th European Conference on Computer Vision. LNCS, vol. 3951, pp. 211--224. Springer, Heidelberg (2006)