Detection, Segmentation, and Tracking of Moving Objects in UAV Videos


Michael Teutsch and Wolfgang Krüger
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB)
Fraunhoferstr. 1, Karlsruhe, Germany, {michael.teutsch,

Abstract: Automatic processing of videos coming from small UAVs offers high potential for advanced surveillance applications but is also very challenging. These challenges include camera motion, high object distance, varying object background, multiple objects close to each other, weak signal-to-noise ratio (SNR), and compression artifacts. In this paper, a video processing chain for the detection, segmentation, and tracking of multiple moving objects is presented that deals with the mentioned challenges. Its foundation is the detection of local image features that are not stationary. By clustering these features and subsequent object segmentation, regions are generated that represent object hypotheses. Multi-object tracking is introduced using a Kalman filter and considering the camera motion. Split or merged object regions are handled by fusion of the regions and the local features. Finally, a quantitative evaluation of object segmentation and tracking is provided.

I. INTRODUCTION

UAV-based camera surveillance is widely used nowadays for reconnaissance, homeland security, or border protection. Robust tracking of single or multiple moving objects is important but difficult to achieve. Camera motion, small object appearances of only a few pixels in the image, changing object background, object aggregation, shading, and noise are prominent among the challenges. In this paper, we present a three-layer processing chain for multi-object tracking that deals with these challenges. A small UAV is used with a visual-optical camera directed perpendicularly to the ground. In the first layer, local image features are detected and categorized into stationary and moving features.
Object hypotheses are generated in the second layer by moving-feature clustering and appearance-based object segmentation. In the third layer, the outputs of the first and second layer are used for tracking, including the handling of split and merge situations, which occur when vehicles overtake each other. Some experiments with quantitative results demonstrate the effectiveness of our processing chain.

Related Work: We review related work that at least partially covers all three layers and uses aerial image data. Perera et al. [1] use KLT [2] features for image registration and Stauffer-Grimson background modeling for moving object detection. Multi-object tracking is performed using a Kalman filter and nearest-neighbor data association. Splits and merges are handled by track linking. Cao et al. [3] also use KLT features, which are tracked for two frames and used for image registration. Vehicles are detected as blobs in the difference image, and tracking is implemented using motion grouping. Kumar et al. [4] use motion-compensated difference images and change detection to find moving objects. Tracking is based on motion, appearance, and shape features. Yao et al. [5] compensate camera motion by estimating a global affine parametric motion model based on sparse optical flow. Blobs are extracted from the difference image with morphological operations and tracked using HSV color and geometrical features. Trajectories are stored in a graph structure for split and merge handling. For image registration, Ibrahim et al. [6] use SIFT/SURF features and RANSAC. Moving objects are detected by Gaussian mixture learning applied to difference images and shape/size estimation. Tracking is based on temporal matching of object characteristics such as shape, size, area, orientation, or color. Xiao et al.
[7] simultaneously detect moving objects using a three-frame difference image and perform multi-object tracking with a probabilistic relation graph matching approach with an embedded vehicle behavior model. Reilly et al. [8] use Harris corners, SIFT descriptors, and RANSAC for image registration. Moving objects are detected using a difference image calculated along 10 frames. For tracking, bipartite graph matching is applied. Finally, in the work of Mundhenk et al. [9], moving objects are detected by subtracting global motion from local motion maps. For object segmentation, a Mean Shift kernel density estimator is applied. Tracking is performed by fitting a Generalized Linear Model (GLM) between two consecutive frames for track verification. Object-related fingerprints are calculated and stored for reacquisition, if necessary.

II. INDEPENDENT MOTION DETECTION

The concept of the processing chain is visualized in Fig. 1. In the independent motion detection layer, local image features are detected that show independent motion relative to the static background of the monitored scene. Corner feature tracking [2] is used to estimate homographies as global image transformations [10] for frame-to-frame alignment as well as local relative image velocities. Independent motion is detected at features with significant relative velocities in order to discriminate features on moving objects from features on the static background.
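The logic of this layer can be illustrated with a minimal sketch (not the authors' implementation: the direct linear transform fit, the iteration count, and the velocity threshold are illustrative assumptions). A homography is fitted to presumed background tracks, and features are flagged as moving when their residual image velocity after alignment is significant:

```python
import numpy as np

def fit_homography(src, dst):
    # Direct linear transform (DLT): least-squares 3x3 H mapping src -> dst.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp(H, pts):
    # Apply H to 2-D points in homogeneous coordinates.
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def classify_moving(prev_pts, curr_pts, n_iter=3, vel_thresh=2.0):
    # Iteratively fit H on presumed background features, then flag
    # features whose residual (independent) velocity is significant.
    prev_pts = np.asarray(prev_pts, float)
    curr_pts = np.asarray(curr_pts, float)
    bg = np.ones(len(prev_pts), bool)
    for _ in range(n_iter):
        H = fit_homography(prev_pts[bg], curr_pts[bg])
        resid = np.linalg.norm(curr_pts - warp(H, prev_pts), axis=1)
        bg = resid < vel_thresh
    return ~bg, resid
```

In practice a robust estimator such as RANSAC (as used in [6], [8]) would replace the simple re-fitting loop shown here.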

[Figure 1. Concept of the UAV video processing chain.]

This approach does not require camera calibration and is largely independent of object appearance. Using homographies instead of plane+parallax decompositions with multi-view geometric constraints [11] is adequate, since our image data mainly comes from UAVs operating at higher altitudes. We do not use motion-compensated difference images [4], [7], because we achieved more reliable results compared to [12], which is based on difference images. The estimated frame-to-frame homographies are used at two steps in our processing chain to compensate for global camera motion. During independent motion detection, they are needed to estimate relative image velocities from feature tracks, and in the multi-object tracking layer, the homographies are used to generate control input for the Kalman filter. The main output of the independent motion detection layer is the set of local image features classified as moving relative to the static background. The output attributes for each feature are image position and relative velocity. First, the moving features are used in the object segmentation layer to find motion regions by feature clustering and to trigger additional appearance-based region segmentation. Second, the moving features are used in conjunction with the segmented regions for multi-object tracking, including track initialization. Fig. 2 gives an idea of typical results from the independent motion detection layer.
Shown are the estimated relative velocity vectors for all 5356 feature tracks, of which 297 have been correctly classified as coming from moving features. Note the large number of feature tracks at parked vehicles which have been correctly classified as part of the static background. The independent motion layer is able to reliably estimate and classify sub-pixel relative motion.

[Figure 2. Example of detected and tracked stationary (red) and moving (yellow) local features.]

III. OBJECT SEGMENTATION

The aim of object segmentation is to generate object hypotheses from the moving local features. Therefore, clustering is applied, followed by spatial fusion of several object segmentation algorithms to improve hypothesis reliability.

A. Local Feature Clustering

The first processing step in the object segmentation layer is to cluster the detected moving features. We employ single-linkage clustering using position and velocity estimates. The selection of distance thresholds is based on the known Ground Sampling Distance (GSD) and the expected size of vehicles. Especially in crowded scenes, over- or under-segmentation of objects close to each other and with similar motion cannot be avoided, so additional appearance-based image features should be exploited.

B. Object Segmentation Algorithms

The calculated image features are different kinds of gradients. Each feature value is written to a feature-specific accumulator image. This accumulator is needed because we calculate multi-scale features for higher robustness and store the results in the same image. Three different kinds of gradient features have been implemented:

1) Korn gradient [13]: This is a linear gradient calculation method similar to Canny but with a normalized filter matrix. We directly use the gradient magnitudes without directions or non-maximum suppression.

2) Morphological gradient [14]: Using the morphological operations erosion (⊖) and dilation (⊕) as well as quadratic structuring elements s_i of different size, multi-scale gradient magnitudes are non-linearly calculated for image I and stored in accumulator A:

A = Σ_{i=1}^{n} ((I ⊕ s_i) - (I ⊖ s_i)).   (1)

3) Local Binary Pattern (LBP) gradient: Rotation-invariant uniform Local Binary Patterns LBP_{P,R}^{riu2} [15] are used to create a filter which calculates the LBP at each pixel position and tests whether it is a texture primitive such as an edge or a corner. The assumption is that all LBPs which are not texture primitives are the result of noise. Hence, they are not considered for gradient calculation. For all accepted pixel positions, the local variance VAR_{P,R} [15] of the LBP neighbors is calculated as gradient magnitude, where P denotes the number of LBP neighbors and R the LBP radius.
By calculating multi-scale LBPs [15] and using the local standard deviation instead of the variance, higher robustness is achieved in accumulator A:

A = Σ_{r=r_1}^{r_n} sqrt(VAR_{P,r}), if LBP_{P,r} is accepted.   (2)

Contour pixels are detected in the accumulators by a standard connected-component labeling algorithm supported by quantile-based adaptive thresholding. This way, a binary image is generated, which is post-processed by morphological closing to fill holes in the object contours or blobs. The best-fitting bounding boxes to these blobs are the final result, and the whole process is visualized in Fig. 3.

[Figure 3. Example of correct object segmentation (red) for a local feature cluster (cyan) with under-segmentation: cluster, accumulator, connected components, morphological closing, object hypotheses.]

Difference images [4], [5], [6], [7] can be used for object blob calculation, too, but especially slow vehicles need the difference of many consecutive images to create a continuous object blob and avoid over-segmentation, while objects driving in convoy may cause under-segmentation. Instead of connected-component labeling, we also tried watershed segmentation [16], but due to the lack of contrast between object and background, the whole region was often flooded.

C. Spatial Fusion

Since all calculated gradient features are similar but not identical, spatial fusion is implemented by writing all gradient magnitudes to one common accumulator. Therefore, we first normalize each feature-specific accumulator by mapping its accumulation values to the value range [0, 255]. Then, the values are added pixelwise and stored in the common accumulator. An alternative is pixelwise multiplication, which performed slightly worse in our tests. Over- and under-segmentation cannot be totally avoided, but a significant improvement is reached compared to local feature clustering, as we show in Section V.
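The accumulator pipeline can be sketched compactly (illustrative only: the structuring-element sizes, the 0.9 quantile, and the plain BFS labeling stand in for the paper's full set of Korn/LBP features and the morphological closing step). Multi-scale morphological gradients are accumulated per Eq. (1), normalized accumulators are fused pixelwise, and bounding boxes are extracted from connected components:

```python
import numpy as np
from collections import deque

def morphological_gradient(img, sizes=(1, 2)):
    # Multi-scale morphological gradient: dilation minus erosion with
    # square structuring elements, summed into one accumulator (Eq. 1).
    acc = np.zeros_like(img, dtype=float)
    for s in sizes:
        pad = np.pad(img, s, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(pad, (2 * s + 1, 2 * s + 1))
        acc += win.max(axis=(2, 3)) - win.min(axis=(2, 3))
    return acc

def fuse(accumulators):
    # Spatial fusion: normalize each accumulator to [0, 255], add pixelwise.
    out = np.zeros_like(accumulators[0], dtype=float)
    for a in accumulators:
        rng = a.max() - a.min()
        out += 255.0 * (a - a.min()) / (rng if rng else 1.0)
    return out

def segment(acc, q=0.9):
    # Quantile-based adaptive threshold, then connected components (BFS)
    # and the bounding box of each blob.
    binary = acc >= np.quantile(acc, q)
    labels = np.zeros(acc.shape, int)
    boxes, lab = [], 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue
        lab += 1
        labels[seed] = lab
        queue, pix = deque([seed]), []
        while queue:
            r, c = queue.popleft()
            pix.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < acc.shape[0] and 0 <= cc < acc.shape[1]
                            and binary[rr, cc] and not labels[rr, cc]):
                        labels[rr, cc] = lab
                        queue.append((rr, cc))
        rows, cols = zip(*pix)
        boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```

On a synthetic bright square, the fused gradient accumulator yields one blob whose bounding box encloses the square's boundary.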
Spatio-temporal fusion [17] performs even better, but so far this approach does not run in real-time.

IV. MULTI-OBJECT TRACKING

With multi-object tracking, stable object tracks are achieved, and further improvement of the segmentation results is investigated, especially in cases where vehicles overtake each other. Spatial information provided by the segmentation and motion information provided by the local features are fused to handle such situations [18]. We decided on a Kalman filter since object and camera motion are mostly linear in our application; furthermore, it is easy to implement and fast. Five parameters are tracked by the Kalman filter: object center (x, y), size (w, l), and orientation α.
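A five-parameter Kalman filter with camera motion as control input can be sketched as follows (a toy version under simplifying assumptions: a static-object process model, direct measurement of the full state, isotropic noise, and the camera motion reduced to a pure translation extracted from the frame-to-frame homography):

```python
import numpy as np

class BoxKalman:
    # Tracks (x, y, w, l, alpha); camera motion enters as control input,
    # so the prediction moves the box with the global image transform.
    def __init__(self, z0, q=1e-2, r=1.0):
        self.x = np.asarray(z0, float)   # state estimate
        self.P = np.eye(5)               # state covariance
        self.F = np.eye(5)               # static-object process model
        self.H = np.eye(5)               # the state is measured directly
        self.Q = q * np.eye(5)           # process noise
        self.R = r * np.eye(5)           # measurement noise

    def predict(self, cam_shift=(0.0, 0.0)):
        # cam_shift: frame-to-frame camera translation from the homography.
        u = np.array([cam_shift[0], cam_shift[1], 0.0, 0.0, 0.0])
        self.x = self.F @ self.x + u
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Standard Kalman update with a segmented region as measurement.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(5) - K @ self.H) @ self.P
        return self.x
```

The prediction step first shifts the box by the camera motion, and the update then blends in the measured region according to the noise settings.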

A. Assignment of Regions to Tracks

We call the oriented bounding boxes resulting from object segmentation regions. They are assigned as measurements to already existing tracks and are also used to initialize new tracks. A region is assigned to a specific track if a minimum threshold for the bounding box intersection area of the region and the Kalman prediction is exceeded. The threshold is chosen small for high tolerance during validation gating. If this assignment is ambiguous for one or more regions or tracks, split and merge handling has to be applied (see Section IV-C).

B. Assignment of Features to Tracks

Local features are assigned to already existing tracks only, but not used for track initialization. The idea is to use them as support for split and merge handling. There are four criteria for local feature assignment [18]: 1) the feature is not assigned to any track, 2) its position is inside the Kalman prediction, 3) its position is not inside another Kalman prediction, and 4) it has similar motion (magnitude and direction) as the track. If a feature is assigned, the related track parameters and the relative position within the track bounding box are stored for measurement reconstruction in case of a split or merge. There is a maximum limit of 20 assigned features per track. Outliers with respect to position or motion are removed from the set.

C. Split and Merge Handling

Merge handling is needed mainly in overtaking situations, where object segmentation is not able to split the objects correctly. Each assigned feature reconstructs the measurement (region) of its track using the stored track-related parameters. This set of reconstructed measurements is fused with a median filter for more stability. The power of this approach is demonstrated in Fig. 4. Four objects are under-segmented in the same cluster (left cyan cluster). Object segmentation is only able to segment regions in which the upper two and the lower two objects are still under-segmented.
Merge handling is able to guarantee correct tracking (green boxes) based on the earlier assigned local features (green dots). Unassigned features are visualized as yellow dots. Split handling is necessary in overtaking situations, where already merged objects enter the camera's field of view. One track is initialized, and during the overtaking maneuver it is very difficult to split the objects. However, as soon as the regions are split correctly by object segmentation, the track will concentrate on one of the regions after some time, and a new track is initialized for the other region. This process can be accelerated by assigning local features directly to the regions in order to estimate and compare their relative motion. If the relative motion difference is big enough, the track concentrates on only one region earlier. Furthermore, the split regions of one object, which can be a failure of object segmentation, have similar motion and, thus, are correctly merged and assigned to one track. An example is given in Fig. 4 for the right cyan local feature cluster.

[Figure 4. Over-/under-segmented local feature clusters (cyan) with incorrect segmentation (red), but correct split/merge handling for tracking (green boxes) using assigned local features (green dots).]

D. Tracking with Kalman Filter

As soon as an unambiguous assignment of measurements (regions) to tracks is achieved, the Kalman filter is applied. The bounding boxes after the Kalman update, which are considered stable objects after some tracking time, are the final result of the whole processing chain. Kalman prediction is performed for all tracks for the next time step. If a track did not get any assigned region or local feature, it is kept alive for a few time steps using Kalman prediction before it is deleted. The camera motion parameters are used to set up the control vector for the Kalman filter. This way, camera motion is considered in the Kalman update and prediction.
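The measurement reconstruction used for split and merge handling (Section IV-C) can be sketched as follows (hypothetical helper names; the paper stores full track-related parameters per feature, while this toy keeps only each feature's offset from the box center). Each assigned feature proposes a box center, and the median fuses the proposals robustly:

```python
import numpy as np

def store_offsets(track_center, feature_positions):
    # Remember each assigned feature's position relative to the track box.
    return [np.asarray(p, float) - track_center for p in feature_positions]

def reconstruct_center(feature_positions, offsets):
    # Each feature proposes a box center from its stored offset; the
    # per-axis median fuses the proposals, so a few drifting features
    # do not corrupt the reconstructed measurement.
    proposals = [np.asarray(p, float) - o
                 for p, o in zip(feature_positions, offsets)]
    return np.median(np.stack(proposals), axis=0)
```

Even with one feature drifting off the object, the reconstructed center follows the majority of the features, which is what keeps a merged track on its own object.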
V. EXPERIMENTAL RESULTS

The main test sequence consists of 370 frames with a resolution of pixels. Along the sequence, 43 moving objects appear, including several split and merge situations. Standard vehicle size is about 15 × 5 pixels. The evaluation is split into two parts: experiments for the stability of the local image features as well as for the completeness and precision of object segmentation and tracking.

A. Evaluation of the Local Image Features

In summary, 5401 different moving features were detected and tracked during the whole test sequence. The mean lifetime of each feature was frames, and the upper histogram of Fig. 5 shows the distribution of the features with respect to their lifetime. The first bin contains all features with a lifetime of 10 frames or less, the second 11 to 50 frames, and so on. For better visualization, the vertical axis scale is logarithmic. There are 221 features which have a lifetime of 100 frames or more. Along the test sequence, there were 8863 assignments of local features to tracks. Several features have been assigned

more than once, especially if they were in the track's border area, being sometimes inside and sometimes outside of the Kalman prediction. In the lower histogram of Fig. 5, all features are counted with respect to the time of being assigned to a track. 44 features were assigned to a track for 100 frames or more. Since there were 20 Kalman tracks with a lifetime of 100 frames or more, this means that more than two local features accompany each track for its whole lifetime. Each of these long-living tracks had assigned features (20 is the maximum) and 2.3 feature adds/losses per frame on average.

[Figure 5. Lifetime (yellow) and track assignment time (green) for all local features during the test sequence of 370 frames.]

Table I
EVALUATION OF OBJECT SEGMENTATION COMPLETENESS: CORRECT, UNDER-/OVER-SEGMENTATION (US/OS), AND MISS RATES.

method                     correct   US   OS   miss
multi-object tracking
spatial fusion
morphological gradient
LBP gradient
Korn gradient
local feature clustering

B. Evaluation of Segmentation and Tracking

Segmentation and multi-object tracking were evaluated for completeness and precision. 15 objects were manually labeled for position and size during 100 frames. Table I shows the completeness. Instances of an object being found correctly, under-segmented (US), over-segmented (OS), or missed are counted. This means that two merged objects (US) are counted as two mistakes, while one object with two segments (OS) is counted as one mistake. There are 56 % correctly found, 35 % under-segmented, and 9 % over-segmented objects for local feature clustering. The correct rates improve for the single object segmentation approaches and the fusion. Finally, there is no under-/over-segmentation for tracking and 93 % correctly found objects.

Table II
EVALUATION OF OBJECT SEGMENTATION PRECISION: MEAN ERRORS IN PIXELS FOR POSITION (x, y) AND SIZE (w, l).
method                     e_x   e_y   e_w   e_l
multi-object tracking
spatial fusion
morphological gradient
LBP gradient
Korn gradient
local feature clustering

Table II shows the precision, represented by mean errors for position x and y as well as width w and length l. Under-/over-segmentation produces the highest position and size errors; hence, local feature clustering performed worst. As in the evaluation of completeness, the results improve for the segmentation algorithms as well as for the fusion. The highest precision is achieved by multi-object tracking, with a mean error of 2.2 pixels for position, 4.5 pixels for width, and 8.4 pixels for length. When considering the known GSD, this corresponds to mean errors of 0.76 m for position, 1.55 m for w, and 2.9 m for l. Vertical object shading causes the large error difference between w and l. Example results for each processing chain layer are shown in Fig. 6.

VI. CONCLUSIONS

In this paper, a processing chain is presented for the precise tracking of multiple moving objects in UAV videos. Local image features are detected and tracked for frame-to-frame homography estimation. Stationary features are used for the compensation of camera motion, and moving features are used to detect and cluster independent motion for initial object hypotheses. These hypotheses are improved by advanced gradient-based object segmentation algorithms, which are spatially fused for higher robustness. Finally, multi-object tracking is introduced using the object segments (regions) as measurements for a Kalman filter, the moving features for split and merge handling, and the camera motion parameters as control vector for the Kalman filter. Applied to our UAV data, we achieved 93 % correctly detected moving objects and mean errors of 0.76 m for position, 1.55 m for width, and 2.9 m for length estimation.

REFERENCES

[1] A. G. A. Perera, C. Srinivas, A. Hoogs, G. Brooksby, and W.
Hu, "Multi-Object Tracking Through Simultaneous Long Occlusions and Split-Merge Conditions," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA.
[2] J. Shi and C. Tomasi, "Good Features to Track," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.
[3] X. Cao, J. Lan, P. Yan, and X. Li, "KLT Feature Based Vehicle Detection and Tracking in Airborne Videos," in Proc. of the Intern. Conf. on Image and Graphics, Hefei, China, 2011.

[Figure 6. Example image (street) with independent motion detection (yellow vectors), local feature clustering (cyan boxes), object detection (red boxes), and multi-object tracking (green boxes) including assigned local features (green dots) and not assigned features (yellow dots).]

[4] R. Kumar, H. Sawhney, S. Samarasekera, S. Hsu, H. Tao, Y. Guo, K. Hanna, A. Pope, R. Wildes, D. Hirvonen, M. Hansen, and P. Burt, "Aerial video surveillance and exploitation," Proc. of the IEEE, vol. 89, no. 10, Oct.
[5] F. Yao, A. Sekmen, and M. J. Malkani, "Multiple moving target detection, tracking, and recognition from a moving observer," in Proc. of the IEEE Intern. Conf. on Information and Automation (ICIA), Hunan, China, Jun.
[6] A. W. N. Ibrahim, P. W. Ching, G. Seet, M. Lau, and W. Czajewski, "Moving Objects Detection and Tracking Framework for UAV-based Surveillance," in Proc. of the Pacific-Rim Symposium on Image and Video Technology, Singapore.
[7] J. Xiao, H. Cheng, H. Sawhney, and F. Han, "Vehicle Detection and Tracking in Wide Field-of-View Aerial Video," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA.
[8] V. Reilly, H. Idrees, and M. Shah, "Detection and Tracking of Large Number of Targets in Wide Area Surveillance," in Proc. of the 11th European Conference on Computer Vision (ECCV), Heraklion, Greece, Sep.
[9] T. N. Mundhenk, K.-Y. Ni, Y. Chen, K. Kim, and Y. Owechko, "Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition," in Proc. SPIE Vol. 8301.
[10] R. Hartley and A. Zisserman, Multiple-View Geometry in Computer Vision. Cambridge University Press.
[11] M. Irani and P. Anandan, "A unified approach to moving object detection in 2D and 3D scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 6, Jun.
[12] N. Heinze, M.
Esswein, W. Krüger, and G. Saur, "Automatic image exploitation system for small UAVs," in Proc. SPIE Vol. 6946.
[13] A. Korn, "Toward a Symbolic Representation of Intensity Changes in Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 5.
[14] J. S. J. Lee, R. M. Haralick, and L. G. Shapiro, "Morphologic edge detection," IEEE Journal of Robotics and Automation, vol. 3, no. 2, Apr.
[15] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7.
[16] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools.
[17] M. Teutsch and W. Krüger, "Spatio-Temporal Fusion of Object Segmentation Approaches for Moving Distant Targets," in Proc. of the International Conference on Information Fusion (FUSION), Singapore, Jul.
[18] M. Teutsch, W. Krüger, and J. Beyerer, "Fusion of Region and Point-Feature Detections for Measurement Reconstruction in Multi-Target Kalman Tracking," in Proc. of the Intern. Conf. on Information Fusion (FUSION), Chicago, IL, USA, 2011.

Year: 2012
Author(s): Teutsch, Michael; Krüger, Wolfgang
Title: Detection, segmentation, and tracking of moving objects in UAV videos
DOI: /AVSS
IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Details: Institute of Electrical and Electronics Engineers -IEEE-; IEEE Computer Society: IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance, AVSS Proceedings, September 2012, Beijing, China. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2012. ISBN: (Print)


More information

A Texture-based Method for Detecting Moving Objects

A Texture-based Method for Detecting Moving Objects A Texture-based Method for Detecting Moving Objects M. Heikkilä, M. Pietikäinen and J. Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O. Box 4500

More information

III. VERVIEW OF THE METHODS

III. VERVIEW OF THE METHODS An Analytical Study of SIFT and SURF in Image Registration Vivek Kumar Gupta, Kanchan Cecil Department of Electronics & Telecommunication, Jabalpur engineering college, Jabalpur, India comparing the distance

More information

Color Image Segmentation

Color Image Segmentation Color Image Segmentation Yining Deng, B. S. Manjunath and Hyundoo Shin* Department of Electrical and Computer Engineering University of California, Santa Barbara, CA 93106-9560 *Samsung Electronics Inc.

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Detection and Tracking of Large Number of Targets in Wide Area Surveillance

Detection and Tracking of Large Number of Targets in Wide Area Surveillance Detection and Tracking of Large Number of Targets in Wide Area Surveillance Vladimir Reilly, Haroon Idrees, and Mubarak Shah vsreilly@eecs.ucf.edu,haroon.idrees@knights.ucf.edu, shah@eecs.ucf.edu Computer

More information

Final Exam Study Guide

Final Exam Study Guide Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Optical flow and tracking

Optical flow and tracking EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,

More information

Implementation of Optical Flow, Sliding Window and SVM for Vehicle Detection and Tracking

Implementation of Optical Flow, Sliding Window and SVM for Vehicle Detection and Tracking Implementation of Optical Flow, Sliding Window and SVM for Vehicle Detection and Tracking Mohammad Baji, Dr. I. SantiPrabha 2 M. Tech scholar, Department of E.C.E,U.C.E.K,Jawaharlal Nehru Technological

More information

Video Google: A Text Retrieval Approach to Object Matching in Videos

Video Google: A Text Retrieval Approach to Object Matching in Videos Video Google: A Text Retrieval Approach to Object Matching in Videos Josef Sivic, Frederik Schaffalitzky, Andrew Zisserman Visual Geometry Group University of Oxford The vision Enable video, e.g. a feature

More information

Face Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN

Face Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN 2016 International Conference on Artificial Intelligence: Techniques and Applications (AITA 2016) ISBN: 978-1-60595-389-2 Face Recognition Using Vector Quantization Histogram and Support Vector Machine

More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

Object detection using non-redundant local Binary Patterns

Object detection using non-redundant local Binary Patterns University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh

More information

A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification

A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification Huei-Yung Lin * and Juang-Yu Wei Department of Electrical Engineering National Chung Cheng University Chia-Yi

More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

Object Recognition with Invariant Features

Object Recognition with Invariant Features Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user

More information

Local features: detection and description May 12 th, 2015

Local features: detection and description May 12 th, 2015 Local features: detection and description May 12 th, 2015 Yong Jae Lee UC Davis Announcements PS1 grades up on SmartSite PS1 stats: Mean: 83.26 Standard Dev: 28.51 PS2 deadline extended to Saturday, 11:59

More information

Local features and image matching. Prof. Xin Yang HUST

Local features and image matching. Prof. Xin Yang HUST Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source

More information

Flooded Areas Detection Based on LBP from UAV Images

Flooded Areas Detection Based on LBP from UAV Images Flooded Areas Detection Based on LBP from UAV Images ANDRADA LIVIA SUMALAN, DAN POPESCU, LORETTA ICHIM Faculty of Automatic Control and Computers University Politehnica of Bucharest Bucharest, ROMANIA

More information

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University

More information

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References

More information

IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION. Maral Mesmakhosroshahi, Joohee Kim

IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION. Maral Mesmakhosroshahi, Joohee Kim IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION Maral Mesmakhosroshahi, Joohee Kim Department of Electrical and Computer Engineering Illinois Institute

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Global Flow Estimation. Lecture 9

Global Flow Estimation. Lecture 9 Motion Models Image Transformations to relate two images 3D Rigid motion Perspective & Orthographic Transformation Planar Scene Assumption Transformations Translation Rotation Rigid Affine Homography Pseudo

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

An Angle Estimation to Landmarks for Autonomous Satellite Navigation

An Angle Estimation to Landmarks for Autonomous Satellite Navigation 5th International Conference on Environment, Materials, Chemistry and Power Electronics (EMCPE 2016) An Angle Estimation to Landmarks for Autonomous Satellite Navigation Qing XUE a, Hongwen YANG, Jian

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

Detecting and Identifying Moving Objects in Real-Time

Detecting and Identifying Moving Objects in Real-Time Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary

More information

Image Analysis Lecture Segmentation. Idar Dyrdal

Image Analysis Lecture Segmentation. Idar Dyrdal Image Analysis Lecture 9.1 - Segmentation Idar Dyrdal Segmentation Image segmentation is the process of partitioning a digital image into multiple parts The goal is to divide the image into meaningful

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

TEXTURE CLASSIFICATION METHODS: A REVIEW

TEXTURE CLASSIFICATION METHODS: A REVIEW TEXTURE CLASSIFICATION METHODS: A REVIEW Ms. Sonal B. Bhandare Prof. Dr. S. M. Kamalapur M.E. Student Associate Professor Deparment of Computer Engineering, Deparment of Computer Engineering, K. K. Wagh

More information

Detection and Tracking of Large Number of Targets in Wide Area Surveillance

Detection and Tracking of Large Number of Targets in Wide Area Surveillance Detection and Tracking of Large Number of Targets in Wide Area Surveillance Vladimir Reilly, Haroon Idrees, and Mubarak Shah Computer Vision Lab, University of Central Florida, Orlando, USA vsreilly@eecs.ucf.edu,

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Fundamentals of Digital Image Processing

Fundamentals of Digital Image Processing \L\.6 Gw.i Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Chris Solomon School of Physical Sciences, University of Kent, Canterbury, UK Toby Breckon School of Engineering,

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Introduction to Medical Imaging (5XSA0) Module 5

Introduction to Medical Imaging (5XSA0) Module 5 Introduction to Medical Imaging (5XSA0) Module 5 Segmentation Jungong Han, Dirk Farin, Sveta Zinger ( s.zinger@tue.nl ) 1 Outline Introduction Color Segmentation region-growing region-merging watershed

More information

Time Stamp Detection and Recognition in Video Frames

Time Stamp Detection and Recognition in Video Frames Time Stamp Detection and Recognition in Video Frames Nongluk Covavisaruch and Chetsada Saengpanit Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand E-mail: nongluk.c@chula.ac.th

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008

Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Instructor: YingLi Tian Video Surveillance E6998-007 Senior/Feris/Tian 1 Outlines Moving Object Detection with Distraction Motions

More information

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882 Matching features Building a Panorama Computational Photography, 6.88 Prof. Bill Freeman April 11, 006 Image and shape descriptors: Harris corner detectors and SIFT features. Suggested readings: Mikolajczyk

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

ФУНДАМЕНТАЛЬНЫЕ НАУКИ. Информатика 9 ИНФОРМАТИКА MOTION DETECTION IN VIDEO STREAM BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING

ФУНДАМЕНТАЛЬНЫЕ НАУКИ. Информатика 9 ИНФОРМАТИКА MOTION DETECTION IN VIDEO STREAM BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING ФУНДАМЕНТАЛЬНЫЕ НАУКИ Информатика 9 ИНФОРМАТИКА UDC 6813 OTION DETECTION IN VIDEO STREA BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING R BOGUSH, S ALTSEV, N BROVKO, E IHAILOV (Polotsk State University

More information

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE COMPUTER VISION 2017-2018 > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE OUTLINE Optical flow Lucas-Kanade Horn-Schunck Applications of optical flow Optical flow tracking Histograms of oriented flow Assignment

More information

A Fast Moving Object Detection Technique In Video Surveillance System

A Fast Moving Object Detection Technique In Video Surveillance System A Fast Moving Object Detection Technique In Video Surveillance System Paresh M. Tank, Darshak G. Thakore, Computer Engineering Department, BVM Engineering College, VV Nagar-388120, India. Abstract Nowadays

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Moving Object Detection for Video Surveillance

Moving Object Detection for Video Surveillance International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Moving Object Detection for Video Surveillance Abhilash K.Sonara 1, Pinky J. Brahmbhatt 2 1 Student (ME-CSE), Electronics and Communication,

More information

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37 Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The

More information

Classification of small Boats in Infrared Images for maritime Surveillance

Classification of small Boats in Infrared Images for maritime Surveillance Classification of small Boats in Infrared Images for maritime Surveillance Michael Teutsch Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) Fraunhoferstrasse 1, 76131

More information

A Texture-Based Method for Modeling the Background and Detecting Moving Objects

A Texture-Based Method for Modeling the Background and Detecting Moving Objects A Texture-Based Method for Modeling the Background and Detecting Moving Objects Marko Heikkilä and Matti Pietikäinen, Senior Member, IEEE 2 Abstract This paper presents a novel and efficient texture-based

More information

An Approach for Real Time Moving Object Extraction based on Edge Region Determination

An Approach for Real Time Moving Object Extraction based on Edge Region Determination An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,

More information

Contents I IMAGE FORMATION 1

Contents I IMAGE FORMATION 1 Contents I IMAGE FORMATION 1 1 Geometric Camera Models 3 1.1 Image Formation............................. 4 1.1.1 Pinhole Perspective....................... 4 1.1.2 Weak Perspective.........................

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

MULTI TARGET TRACKING ON AERIAL VIDEOS

MULTI TARGET TRACKING ON AERIAL VIDEOS ISPRS Istanbul Workshop 200 on Modeling of optical airborne and spaceborne Sensors, WG I/4, Oct. -3, IAPRS Vol. XXXVIII-/W7. MULTI TARGET TRACKING ON AERIAL VIDEOS Gellért Máttyus, Csaba Benedek and Tamás

More information

Implementation of a Face Recognition System for Interactive TV Control System

Implementation of a Face Recognition System for Interactive TV Control System Implementation of a Face Recognition System for Interactive TV Control System Sang-Heon Lee 1, Myoung-Kyu Sohn 1, Dong-Ju Kim 1, Byungmin Kim 1, Hyunduk Kim 1, and Chul-Ho Won 2 1 Dept. IT convergence,

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

Locating 1-D Bar Codes in DCT-Domain

Locating 1-D Bar Codes in DCT-Domain Edith Cowan University Research Online ECU Publications Pre. 2011 2006 Locating 1-D Bar Codes in DCT-Domain Alexander Tropf Edith Cowan University Douglas Chai Edith Cowan University 10.1109/ICASSP.2006.1660449

More information

BRIEF Features for Texture Segmentation

BRIEF Features for Texture Segmentation BRIEF Features for Texture Segmentation Suraya Mohammad 1, Tim Morris 2 1 Communication Technology Section, Universiti Kuala Lumpur - British Malaysian Institute, Gombak, Selangor, Malaysia 2 School of

More information

Multiple-Choice Questionnaire Group C

Multiple-Choice Questionnaire Group C Family name: Vision and Machine-Learning Given name: 1/28/2011 Multiple-Choice naire Group C No documents authorized. There can be several right answers to a question. Marking-scheme: 2 points if all right

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Douglas R. Heisterkamp University of South Alabama Mobile, AL 6688-0002, USA dheister@jaguar1.usouthal.edu Prabir Bhattacharya

More information

CAP 5415 Computer Vision Fall 2012

CAP 5415 Computer Vision Fall 2012 CAP 5415 Computer Vision Fall 01 Dr. Mubarak Shah Univ. of Central Florida Office 47-F HEC Lecture-5 SIFT: David Lowe, UBC SIFT - Key Point Extraction Stands for scale invariant feature transform Patented

More information

Motion in 2D image sequences

Motion in 2D image sequences Motion in 2D image sequences Definitely used in human vision Object detection and tracking Navigation and obstacle avoidance Analysis of actions or activities Segmentation and understanding of video sequences

More information

Scene Text Detection Using Machine Learning Classifiers

Scene Text Detection Using Machine Learning Classifiers 601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department

More information

School of Computing University of Utah

School of Computing University of Utah School of Computing University of Utah Presentation Outline 1 2 3 4 Main paper to be discussed David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, IJCV, 2004. How to find useful keypoints?

More information

Robust Horizontal Line Detection and Tracking in Occluded Environment for Infrared Cameras

Robust Horizontal Line Detection and Tracking in Occluded Environment for Infrared Cameras Robust Horizontal Line Detection and Tracking in Occluded Environment for Infrared Cameras Sungho Kim 1, Soon Kwon 2, and Byungin Choi 3 1 LED-IT Fusion Technology Research Center and Department of Electronic

More information

Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements

Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements M. Lourakis and E. Hourdakis Institute of Computer Science Foundation for Research and Technology Hellas

More information

Motion Detection and Segmentation Using Image Mosaics

Motion Detection and Segmentation Using Image Mosaics Research Showcase @ CMU Institute for Software Research School of Computer Science 2000 Motion Detection and Segmentation Using Image Mosaics Kiran S. Bhat Mahesh Saptharishi Pradeep Khosla Follow this

More information

Car tracking in tunnels

Car tracking in tunnels Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern

More information

Spatio-Temporal LBP based Moving Object Segmentation in Compressed Domain

Spatio-Temporal LBP based Moving Object Segmentation in Compressed Domain 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance Spatio-Temporal LBP based Moving Object Segmentation in Compressed Domain Jianwei Yang 1, Shizheng Wang 2, Zhen

More information

Multi-Camera Calibration, Object Tracking and Query Generation

Multi-Camera Calibration, Object Tracking and Query Generation MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object

More information

Postprint.

Postprint. http://www.diva-portal.org Postprint This is the accepted version of a paper presented at 14th International Conference of the Biometrics Special Interest Group, BIOSIG, Darmstadt, Germany, 9-11 September,

More information

Announcements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test

Announcements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test Announcements (Part 3) CSE 152 Lecture 16 Homework 3 is due today, 11:59 PM Homework 4 will be assigned today Due Sat, Jun 4, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying

More information