AN ADAPTIVE MESH METHOD FOR OBJECT TRACKING

Mahdi Koohi 1 and Abbas Shakery 2

1 Department of Computer Engineering, Islamic Azad University, Shahr-e-Qods Branch, Tehran, Iran, m.kohy@yahoo.com
2 Department of Electronic Engineering, Azad University, South Tehran Branch, Iran, a.shakery@gmail.com

DOI: 10.5121/ijp2p.2011.2201

ABSTRACT

Object tracking is one of the most important problems in modern vision systems, and research in this field is ongoing. A suitable tracking method must not only recognize and track the target object across consecutive frames, but must also react reliably and efficiently to the phenomena that disturb the tracking process, while remaining efficient enough for real-time applications. In this article an effective mesh-based method is introduced for tracking in consecutive frames, and its advantages and limitations are discussed.

KEYWORDS

Tracking, Adaptive Mesh, Polygon Approximation.

1. INTRODUCTION

Object tracking is regarded as one of the most important problems in computer vision; it has not been solved completely and research in this field continues. A desirable tracking method should not only detect and track the target object in consecutive frames, but should also be reliable and robust while remaining implementable in real-time applications.

Recently a new idea has been proposed for the mesh approach: using an adaptive mesh structure instead of a uniform one. In other words, instead of following a fixed set of nodes in every frame, the nodes to be followed in the next frame are selected adaptively between each pair of consecutive frames. Such a process makes the tracking more robust and reliable against partial or full occlusions, the appearance of new objects in the image, or objects leaving the image [1], [2]. Although this idea is new and controversial in some aspects, it clearly complicates the tracking process; looking beyond that issue, however, shows that it can effectively overcome these challenges. In this article the method is described in more detail, since few references cover it, and the algorithmic results have been obtained on synthetic and some real images.

2. DISTURBING FACTORS OF THE TRACKING PROCESS

The phenomena that disturb the tracking process include:

1) Camera instability relative to the objects, causing image magnification, shrinking, or rotation.
2) Illumination changes in the scene.

3) Objects that do not move in a straight line.
4) Objects that do not move with constant speed.
5) Partial or full occlusion of the target object by other objects.

3. ADAPTIVE MESH DESIGN

Figure 1 shows an object moving over a static background.

Figure 1. Movement of the two moving borders that follow an object's movement.

Assuming an ellipsoidal object translating to the right, two moving borders result: the border moving to the right is the Background To Be Covered (BTBC), and the border moving to the left, showing the background that becomes visible, is the Uncovered Background (UB) [2], [7].

The adaptive mesh structure is defined as follows:

1) In the current frame, no node should be placed in the BTBC area, because these nodes will be occluded in the next frame and no motion vector can be computed for them.
2) More than one motion vector may be assigned to nodes that lie on object borders, because such nodes sit between two or several objects or between an object and the background. For example, if a node is located on the border between the object and the background, two motion vectors should be assigned to it: one modeling the object motion and one modeling the background motion.
3) The mesh inside UB areas of the next frame should be reintroduced so that the new areas can be followed. For exact modeling of such areas with a mesh model, a set of nodes is placed on their borders; these nodes are obtained through polygon approximation.

Because the motion vectors around such nodes are not accurate enough, the BTBC and UB areas cannot be estimated exactly, and a motion-compensation step is used to compensate for these estimation errors. First the changed area in the current frame is estimated by the mesh design algorithm, and frame k is motion-compensated toward frame k+1 using the estimated motion field. In this process a motion-failure (MF) region is estimated instead of the UB area; the reason is to adapt the mesh structure to new objects in the image. The MF area in the next frame is estimated, in a way similar to the BTBC detection, from the frame difference that remains after parametric motion compensation, and the colors or intensities of this area are stored and coded as side data. Then the mesh inside the MF area is re-created in order to follow such areas. The MF area includes objects that appear for the first time as well as objects that occlude previously tracked objects.

4. MOTION FIELD ESTIMATION

To estimate a dense motion field, methods based on the brightness constancy (optical flow) equation, such as Lucas-Kanade [3] or Horn-Schunck [4], are used. Such methods produce a smoother motion field than methods such as block matching. In [8], [9], [10] the speeds of many existing algorithms (computing optical flow for the "diverging trees" sequence) are compared and compiled in tabular form. It is worth noting that the regularization and smoothing parameters of such methods are important: if the motion field is over-smoothed, following the motion of small regions becomes impossible.

5. OPTICAL FLOW EQUATION

The brightness constancy equation for a node, assuming that its intensity stays constant between two consecutive frames, is

dE/dt = E_x (dx/dt) + E_y (dy/dt) + E_t = 0,  with u = dx/dt and v = dy/dt, i.e.

E_x u + E_y v + E_t = 0    (1)

where E denotes the intensity at node (x, y) and E_x, E_y, and E_t are the partial derivatives of the intensity with respect to x, y, and time at that node. The additional constraints used together with the brightness constancy equation reflect the structure and properties of the motion field. One such constraint is the smoothness of the motion field, imposed by minimizing the squared magnitude of the gradients of the velocity components:

u_x^2 + u_y^2 + v_x^2 + v_y^2    (2)

Each such constraint provides an additional equation. The brightness constancy equation rests on the assumption that the intensity of a node stays constant between two frames, but any illumination change can alter the intensity of a node. To account for noise or illumination changes, and to keep the brightness constancy equation usable, the constraint is solved together with the smoothness term as a system of equations. When all objects in the scene move together, or when the objects are fixed and the camera translates in a plane parallel to its own image plane (without restricting the camera's tilting movement), the brightness constancy equation reduces to a constant optical flow: a single pair (u, v) has to be computed for all nodes of the image, and this is done by minimizing the sum of the squared optical-flow constraint over the image:

Σ_{(x,y)} (E_x u + E_y v + E_t)^2    (3)
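Equation (1) gives one linear constraint per pixel in the two unknowns (u, v). As a concrete illustration, the following sketch solves the stacked constraints over a small window around a single mesh node in the Lucas-Kanade manner; it is a minimal NumPy illustration under simple assumptions (central-difference derivatives, a fixed square window), not the authors' implementation. Summing the same squared constraint over the whole image instead yields the constant-flow solution of equation (3).

```python
import numpy as np


def lucas_kanade_at_node(frame_k, frame_k1, x, y, half_win=7):
    """Estimate (u, v) at node (x, y) from two consecutive grayscale frames
    by solving the stacked constraints E_x*u + E_y*v = -E_t in least squares
    over a small window around the node."""
    f0 = frame_k.astype(np.float64)
    f1 = frame_k1.astype(np.float64)

    # Spatial derivatives (central differences) and temporal derivative.
    Ey, Ex = np.gradient(f0)        # np.gradient returns d/d(row), d/d(col)
    Et = f1 - f0

    # Stack one constraint equation per pixel of the window around (x, y).
    ys = slice(y - half_win, y + half_win + 1)
    xs = slice(x - half_win, x + half_win + 1)
    A = np.stack([Ex[ys, xs].ravel(), Ey[ys, xs].ravel()], axis=1)   # N x 2
    b = -Et[ys, xs].ravel()                                          # N

    # Least-squares solution of A [u, v]^T = b (degrades gracefully when the
    # window has little texture, i.e. the aperture problem).
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v


if __name__ == "__main__":
    # Toy check: a Gaussian blob shifted one pixel to the right.
    yy, xx = np.mgrid[0:64, 0:64]
    prev = np.exp(-((xx - 30.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
    curr = np.exp(-((xx - 31.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
    print(lucas_kanade_at_node(prev, curr, 30, 32))   # roughly (1, 0)
```

Horn-Schunck differs in that it couples all nodes through the smoothness term of equation (2) instead of using a local window.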

6. DEFINING THE BTBC REGION

The BTBC region is the set of pixels in the current frame that are covered in the next frame; its border is estimated by thresholding the displaced difference between two consecutive frames.

7. ALGORITHMIC STEPS OF BTBC REGION DETECTION

The procedure is as follows:

1) Compute the difference between two consecutive frames and detect the changed region using the following processing steps:
   a) Filtering with a median filter (5 x 5).
   b) Three consecutive applications of morphological closing (size 3 x 3), followed by three consecutive applications of morphological opening with the same size.
   c) Eliminating regions smaller than a predefined size.
2) Estimate the dense motion field from frame k to frame k+1.
3) Motion-compensate frame k toward frame k+1 using the estimated dense motion field to obtain the virtual estimate Ĩ_k.
4) If the difference between frame k, denoted I_k(x, y), and the compensated frame Ĩ_k(x, y) is larger than a predefined threshold, the pixel (x, y) is labelled as occluded:

DFD(x, y) = |I_k(x, y) - Ĩ_k(x, y)| > T(x, y)    (4)

Thresholding may leave many small isolated pixels; applying morphological filtering and filling the small holes yields smooth and homogeneous regions. The size of the eliminated pixels or filled holes depends on the filter size.
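The steps above map directly onto standard image-processing operations. The following is a minimal OpenCV-based sketch of one possible reading of Section 7, not the authors' implementation: the Farneback flow estimator and the numeric thresholds (diff_thresh, dfd_thresh, min_area) are illustrative assumptions, while the 5x5 median filter and the triple 3x3 closing/opening follow the text.

```python
import cv2
import numpy as np


def btbc_mask(frame_k, frame_k1, diff_thresh=25, dfd_thresh=20, min_area=50):
    """frame_k, frame_k1: consecutive grayscale uint8 frames.
    Returns a binary mask of pixels labelled as occluded (BTBC candidates)."""
    # 1) Changed region from the raw frame difference, then clean it up.
    diff = cv2.absdiff(frame_k, frame_k1)
    changed = (cv2.medianBlur(diff, 5) > diff_thresh).astype(np.uint8) * 255
    kernel = np.ones((3, 3), np.uint8)
    changed = cv2.morphologyEx(changed, cv2.MORPH_CLOSE, kernel, iterations=3)
    changed = cv2.morphologyEx(changed, cv2.MORPH_OPEN, kernel, iterations=3)

    # 1c) Eliminate connected regions smaller than a minimum size.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(changed)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            changed[labels == i] = 0

    # 2) Dense motion field from frame k to frame k+1.
    flow = cv2.calcOpticalFlowFarneback(frame_k, frame_k1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # 3) Motion compensation: sample frame k+1 at (x+u, y+v) to build Ĩ_k.
    h, w = frame_k.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    compensated = cv2.remap(frame_k1, map_x, map_y, cv2.INTER_LINEAR)

    # 4) Displaced frame difference: pixels the motion field cannot explain,
    # restricted here to the changed region and smoothed morphologically.
    dfd = cv2.absdiff(frame_k, compensated)
    occluded = ((dfd > dfd_thresh) & (changed > 0)).astype(np.uint8) * 255
    occluded = cv2.morphologyEx(occluded, cv2.MORPH_CLOSE, kernel, iterations=3)
    return occluded
```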

Figure 2 shows the result of the BTBC detection. In the first row a rectangular object moves to the right and is partially occluded in the second frame; in the second row the object moves to the right without occlusion in the second frame [2].

Figure 2. Results of BTBC detection.

8. POLYGON APPROXIMATION

During adaptive mesh tracking, the border of each region must be modelled with a limited number of nodes belonging to that region. The model should approximate the actual region boundary as closely as possible. The most common models are polygons and B-spline curves. Polygon approximation is used in the adaptive mesh tracking process because of its simple structure.

Figure 3. Polygon algorithm.

9. POLYGON ALGORITHM STEPS

The algorithm includes the following steps:

1) Identify all pixels located on the border of the region.
2) Among them, find the pair of pixels with the maximum distance from each other and label them 1 and 2. The line connecting them is called the "main axis".

3) Find the two nodes with the largest perpendicular distance from the main axis, one on each side of the axis, and label them 3 and 4. These four nodes are called the "initial vertices". Vertices 3 and 4 must lie on opposite sides of the main axis.
4) Draw a straight line between two of the vertices (such as 1-3 in Figure 3).
5) Repeat step 4 for the three other parts. (A code sketch of these steps is given after Figure 4.)

These nodes are used as the building nodes of the two-dimensional mesh structure. The polygon algorithm is based on convex regions and approximates concave regions as convex ones, which is sufficient for the mesh structure. The algorithm can be extended to more than 128 nodes. Figure 4 shows a circle approximated with 4 and with 15 nodes.

Figure 4. Results of polygon approximation.
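A minimal sketch of Sections 8-9 is given below; it is an interpretation, not the authors' code. Steps 1-3 construct the four initial vertices from the main axis, and the refinement loop, an assumed extension implied by the remark that the algorithm can be extended to more nodes, repeatedly splits the border segment with the largest deviation until the requested number of vertices is reached.

```python
import numpy as np


def _deviation(pts, i, j):
    """Farthest border pixel strictly between indices i and j (in border order,
    wrapping around) from the chord pts[i]-pts[j]; returns (index, distance)."""
    n = len(pts)
    idx = np.arange(i + 1, i + (j - i) % n) % n
    if idx.size == 0:
        return i, 0.0
    a, b = pts[i], pts[j]
    ab = b - a
    d = np.abs(ab[0] * (pts[idx, 1] - a[1]) - ab[1] * (pts[idx, 0] - a[0]))
    d = d / (np.hypot(ab[0], ab[1]) + 1e-12)
    k = int(np.argmax(d))
    return int(idx[k]), float(d[k])


def approximate_polygon(border, n_vertices=4):
    """border: (N, 2) array of ordered border pixel coordinates (closed contour).
    Returns the coordinates of the selected polygon vertices."""
    pts = np.asarray(border, dtype=float)

    # Steps 1-2: the pair of border pixels with maximum mutual distance
    # defines the "main axis" (vertices 1 and 2).
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i1, i2 = np.unravel_index(np.argmax(dists), dists.shape)

    # Step 3: farthest border pixel on each of the two arcs between 1 and 2,
    # i.e. one vertex on each side of the main axis (initial vertices 3 and 4).
    i3, _ = _deviation(pts, int(i1), int(i2))
    i4, _ = _deviation(pts, int(i2), int(i1))

    vertices = sorted({int(i1), int(i2), i3, i4})

    # Assumed refinement: repeatedly split the border segment whose pixels
    # deviate most from the current polygon edge.
    while len(vertices) < n_vertices:
        splits = [_deviation(pts, vertices[k], vertices[(k + 1) % len(vertices)])
                  for k in range(len(vertices))]
        j, dev = max(splits, key=lambda s: s[1])
        if dev == 0.0:
            break                        # every border pixel already fits
        vertices = sorted(set(vertices) | {j})

    return pts[vertices]


if __name__ == "__main__":
    # Border of a circle of radius 20, approximated with 4 and with 15 vertices.
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.stack([20 * np.cos(t) + 32, 20 * np.sin(t) + 32], axis=1)
    print(approximate_polygon(circle, 4).shape, approximate_polygon(circle, 15).shape)
```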

10. SELECTING NODES FOR THE TWO-DIMENSIONAL MESH STRUCTURE (HIERARCHICAL MESH)

This part of the tracking process is an effective algorithm for designing a two-dimensional, content-based mesh. It consists of a non-uniform selection of a set of nodes, followed by a triangulation of the selected nodes. The basic principle is to select the nodes so that the mesh cell borders coincide with the object borders and the node density matches the level of local activity. Figure 5 shows the flow chart of the selection of the mesh structure nodes, and Figure 6 shows the final results.

This method requires a large amount of computation and therefore does not run at a significant speed. Another limitation is the lack of a coverage guarantee: it cannot be proven that the method covers all inter-nodal regions with triangle cells, and it is observed in practice that, even when the process completes, some parts of the inter-nodal regions may remain without triangles.

Figure 5. Flowchart of the selection of the two-dimensional mesh structure nodes.

This limitation motivated the study of a simple and fast triangulation method. The strength of the suggested method, compared with the previous one, is that it can be proven to cover all inter-nodal regions. The algorithm uses a construction called the star polygon.

12. STEPS OF THE SUGGESTED ALGORITHM

The star-polygon algorithm consists of the following steps (a code sketch follows the step list):

1) Consider a node M. Find the nearest neighbour located directly above node M.
2) Find the nearest neighbour located directly below node M.
3) Consider a circle centred at M. Store all nodes located in the right half-plane of node M in one list and review them: if two or more nodes lie on the same ray from M, keep only the node closest to M and eliminate the others on that ray. Repeat until no two nodes in the list share a ray. Then sort the remaining nodes by the slope of the line connecting them to node M, from smallest to largest.
4) Repeat step 3 for the nodes located in the left half-plane of node M.
5) Connect node M, in a star (radial) pattern, to all nodes identified as its neighbours. On each ray of this star there is exactly one node.
6) Sweep the star lines clockwise, starting from the 12 o'clock direction, and form a triangle from each pair of consecutive lines until the polygon around node M is completed.

Repeat this process for all nodes. Since the star polygon of each node is grown next to the polygons of its neighbours, the triangulation covers all inter-nodal regions. The results are shown in Figure 7 and Figure 8.
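A compact reading of the star-polygon construction is sketched below; it is not the authors' code. Quantising ray directions into a fixed number of angular bins to decide when two nodes lie on "one equal radian" is an illustrative assumption, and the upward and downward neighbours of steps 1-2 simply fall out of the per-direction selection.

```python
import numpy as np


def star_triangulate(nodes, angle_bins=72):
    """nodes: (N, 2) array of mesh node coordinates (x, y).
    Returns a set of triangles (index triples) built from each node's star."""
    pts = np.asarray(nodes, dtype=float)
    triangles = set()

    for m in range(len(pts)):
        rel = pts - pts[m]
        dist = np.hypot(rel[:, 0], rel[:, 1])
        # Angle of the ray from M, measured from the +y ("12 o'clock") direction.
        ang = np.mod(np.arctan2(rel[:, 0], rel[:, 1]), 2.0 * np.pi)

        # Steps 1-4: keep at most one neighbour per quantised ray direction,
        # namely the node closest to M on that ray.
        nearest = {}
        for j in range(len(pts)):
            if j == m:
                continue
            bin_idx = int(ang[j] / (2.0 * np.pi) * angle_bins) % angle_bins
            if bin_idx not in nearest or dist[j] < dist[nearest[bin_idx]]:
                nearest[bin_idx] = j

        # Steps 5-6: sweep the star around M and form a triangle from each
        # pair of consecutive rays.
        star = [nearest[b] for b in sorted(nearest)]
        if len(star) < 2:
            continue
        for k in range(len(star)):
            a, b = star[k], star[(k + 1) % len(star)]
            if a != b:
                triangles.add(tuple(sorted((m, a, b))))

    return triangles


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mesh_nodes = rng.uniform(0, 100, size=(20, 2))
    print(len(star_triangulate(mesh_nodes)), "triangles")
```

For nodes near the boundary of the node set, the wrap-around pair of rays can span a large empty angle and produce one oversized triangle; adding nodes at the image corners, as in Figure 8, mitigates this.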

Figure 6. Results of the proposed algorithm for selecting the two-dimensional mesh structure nodes.

Figure 7. Samples of the results of the suggested triangulation method for a set of nodes selected by the star-polygon method.

13. EXAMPLES AND EXPERIMENTATION

Various tracking methods exist, each with its own problems. One example is blob tracking, which eliminates the background using a cost function and uses the resulting blobs in the tracking application. The weakness of this method lies in separating a person from the background when the person's clothing colour is similar to the background [5].

Tracking by background elimination and Kalman filtering: in this method, the blobs obtained after background elimination are used as evidence of the object's presence. The weakness of this method is that the Kalman filter equations are assumed to be linear, which is not applicable in real-time situations [6].

The suggested methods are based on finding the object border through polygon construction. After the object border has been found and the object is tracked by the algorithm, this approach avoids placing nodes in regions that cause occlusion, and it uses the triangulation method to fill the polygon interior.

Figure 8. Samples of the results of the suggested triangulation method for a set of selected nodes, with nodes added at the image corners, by the star-polygon method.

14. CONCLUSION

Object tracking is considered one of the most important problems in image processing and machine vision. Tracking means monitoring the positional change of an object and following it through a video sequence according to a specific goal. Tracking algorithms are also judged by how they behave under illumination changes, object occlusions, and objects that do not move with constant speed or in a straight line, and by whether they remain sufficiently reliable and applicable in real time. The adaptive mesh method is a suitable tracking method: it not only recognizes and tracks the target object in consecutive frames, but also reacts reliably and efficiently to the phenomena that disturb the tracking process, while remaining efficient enough for real-time applications. In this article this adaptive mesh method has been introduced as a suitable tracking method for consecutive frames, and its advantages and limitations have been discussed.

REFERENCES

[1] Gua, Z., "Object Detection and Tracking in Video," Technical Report, Department of Computer Science, Kent State University, 2001.
[2] Altunbasak, Y., "Occlusion-Adaptive, Content-Based Mesh Design and Forward Tracking," IEEE Transactions on Image Processing, Vol. 6, pp. 1270-1280, 1997.
[3] Lucas, B. D. and Kanade, T., "An Iterative Image Registration Technique with an Application to Stereo Vision," in Proc. DARPA Image Understanding Workshop, pp. 121-130, 1981.
[4] Horn, B. K. P. and Schunck, B. G., "Determining Optical Flow," Artificial Intelligence, Vol. 17, pp. 185-203, 1981.
[5] Owen, J., Hunter, A. and Fletcher, E., "A Fast Model-Free Morphology-Based Object Tracking Algorithm," in Proceedings of the British Machine Vision Conference, September 2002.
[6] Masoud, O. and Papanikolopoulos, N. P., "A Novel Method for Tracking and Counting Pedestrians in Real-Time Using a Single Camera," IEEE Transactions on Vehicular Technology, Vol. 50, pp. 1267-1278, 2001.
[7] Guo, D., Lu, Z. and Hu, X., "Content-Based Mesh Design and Occlusion Adaptive GOP Structure," International Conference on Digital Image Processing, IEEE Computer Society, 2009.
[8] Barron, J. L., Fleet, D. J. and Beauchemin, S. S., "Performance of Optical Flow Techniques," International Journal of Computer Vision, Vol. 12, No. 1, pp. 43-77, 1994.
[9] Bober, M. and Kittler, J., "Robust Motion Analysis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, pp. 947-952, 1994.