Methodology of context-dependent recognition of ground objects aboard unmanned aerial vehicle

Mukhina Maryna, Barkulova Iryna, Prymak Artem

Abstract — Two approaches are combined into one algorithm in order to achieve better accuracy of ground object detection and tracking. Blob analysis separates detected objects by a morphological criterion (most commonly a minimum area threshold) and determines the region of interest in which feature points are then selected. The feature points are detected and described by the SURF method. Owing to this combination, the amount of information to be processed decreases, which increases the accuracy and efficiency of the algorithm. A search and flight correction algorithm for the most extended landmark, by which the UAV can be guided, is proposed. Visual analysis of the effectiveness of this algorithm shows that accurate location results can be obtained. The algorithm for calculating the normalized cross correlation (NCC) and its application to template matching is also presented.

Index Terms — normalized cross correlation, speeded up robust features, binary large object

I. INTRODUCTION

Nowadays the Unmanned Aerial Vehicle (UAV) is an advanced technology with great potential to change the methods of warfare radically and to open new civil applications. It is an inseparable part of civil and military programs, and the importance of UAVs has been fully demonstrated in recent years. Regardless of their purpose, the number and range of applications of UAVs will increase significantly in the future. Today UAVs play an increasing role in many social missions such as mobile aerial monitoring, border patrol, wildlife surveys, military training, search and rescue, real-time monitoring, and surveillance of harmful and hazardous industrial facilities; they also solve combat and military tasks.

The main goal of a monitoring system is the detection and recognition of objects, so object classification is an essential process for a UAV monitoring system. Recognition problems in airborne video observation can be solved by means of context-dependent classification. When applying classification, two aims must be achieved: to distinguish between different classes of objects and to support the fusion of spatially connected bounding boxes. We focus on two classes: moving objects and stationary objects.

Object detection is defined as the process of distinguishing interesting objects from the background. Fast and efficient object detection has become an important topic in advanced computer vision systems of unmanned aerial vehicles. For the fast and unpredictable environment observed from aboard a UAV it is essential to detect contrast ground objects, such as targets, aerodromes and runways, without prior learning. Other applications of ground object detection include surveillance systems with intelligent computer vision, especially for landing maneuvering. They can be combined with other tasks such as object position estimation, where it is first necessary to detect the ground object and only then to estimate its position in the region of interest (ROI).

The main problem arising in ground object detection is the variable dimension of the output, caused by the large number of objects that can be present in any given frame of a video sequence, whereas any machine learning algorithm requires fixed input and output dimensions for the model to be trained.
Another important problem for object detection systems is the requirement of real-time (30 fps) operation while keeping the prescribed level of accuracy. A more complex model requires correspondingly more time for inference, so a compromise between accuracy and performance has to be chosen.

II. RELATED WORKS

Application of context-dependent classification to recognition tasks is proposed in [1]. In the context-free classification, the starting point was the Bayesian classifier. Morphological features such as object form, area and eccentricity were considered through context-dependent classification and approximated by linear dependences with acceptable error variance. The research was implemented in MATLAB 2014a on a set of serial images of known forms, using a BlobAnalysis object. As a result, dependences that can be used for object recognition were obtained; further, they can be used together with interest point detectors, such as SURF, to increase the reliability and accuracy of object recognition.

On the basis of probabilistic models, such as the Bayesian classifier and the Markov chain model, detection using only two features related to Binary Large Object (BLOB) analysis was performed in [2]. An algorithm for fast calculation of the normalized cross correlation (NCC) and its application to the problem of template matching is presented in [4]. Given a template t whose position is to be determined in an image f, the basic idea of the algorithm is to represent the template, for which the normalized cross correlation is calculated, as a sum of rectangular basis functions. The correlation is then calculated for each basis function instead of the whole template, and the correlation of the template t with the image f is obtained as the weighted sum of the correlations of the basis functions.

III. PROBLEM STATEMENT

The problem is formulated as follows. A sequence of video frames with a contrast ground object is given. It is necessary to detect the target with only its brightness and area known as a description. SURF points are of interest only within the region of interest, which is determined from the BLOB analysis of the supposed object.

The SURF method [3] searches for feature points using the Hessian matrix, whose determinant reaches an extremum at points of maximum change of the brightness gradient. The Hessian matrix of a two-dimensional function and its determinant are defined as

H(f(x, y)) = \begin{pmatrix} \partial^2 f / \partial x^2 & \partial^2 f / \partial x \partial y \\ \partial^2 f / \partial x \partial y & \partial^2 f / \partial y^2 \end{pmatrix},
\qquad
\det H = \frac{\partial^2 f}{\partial x^2} \frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2.

It identifies spots, corners and border lines well. For each point the direction of maximum brightness change and the scale, taken from the scale factor of the Hessian matrix, are calculated; the gradient at a point is computed using Haar filters. It should be noted that, although SURF is used to find objects in an image, the method itself does not work with objects and does not separate an object from the background: it considers the picture as a whole and looks for features of the image.

The correlation function [5] for the SURF method can be computed by multiplying the two matrices of descriptors of the reference and current images:

W_{surf} = D_{reference}^{T} D_{current},   (1)

where D_{reference} = [D(1), ..., D(N)] and D_{current} = [D(1), ..., D(n)] are the matrices of SURF descriptors of the reference and current images. The normalized cross correlation (NCC) is found by multiplying the descriptor matrices of the compared images:

E_{NCC} = D_i^{T} D_j,

where D_i, D_j are the matrices of descriptors of the feature points i and j found in both images.

In order to solve the problem of identifying the object it is proposed to use SURF together with BLOB analysis. The blob method follows the standard image processing approach of segmentation: in pixel coordinates a blob is defined as a group of eight-connected pixels having intensities above a threshold intensity. Blob objects can be analyzed and their morphological characteristics calculated, such as area, coordinates of the centroid, perimeter, orientation, etc.

The idea of the proposed algorithm is to combine both approaches. Blob analysis is used to select contrast objects with predefined characteristics and to determine the ROI in which SURF points will subsequently be detected. The main criterion is the area of a blob object, which is easily calculated as

A = \sum_{(i, j) \in R} 1,

where (i, j) are the indices of pixels in the ROI R that are equal to 1, i.e., that belong to foreground objects. Since from frame to frame the object shape may be transformed due to both UAV motion and object motion, it is proposed to account for these changes with the help of a homography matrix.
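As an illustration of this combination, the following MATLAB sketch selects blobs by a minimum area, builds an ROI around each bounding box and detects SURF points only inside it. It assumes the Image Processing and Computer Vision System toolboxes; the input frame name and the area threshold of 200 pixels are illustrative choices, not values taken from the paper.

% Minimal sketch of the Blob + SURF combination (illustrative values).
frame = imread('frame001.png');          % hypothetical frame from the video sequence
gray  = rgb2gray(frame);

% 1. Blob analysis: threshold the frame and keep blobs above a minimum area
bw    = im2bw(gray, graythresh(gray));   % Otsu global threshold
stats = regionprops(bw, 'Area', 'Centroid', 'BoundingBox');
stats = stats([stats.Area] > 200);       % illustrative minimum area threshold

% 2. For each selected blob, take an ROI twice the size of its bounding box
%    and detect/describe SURF points only inside that ROI
for k = 1:numel(stats)
    bb  = stats(k).BoundingBox;                  % [x y w h]
    ctr = bb(1:2) + bb(3:4)/2;
    sz  = 2*bb(3:4);
    roi = round([ctr - sz/2, sz]);
    roi(1:2) = max(roi(1:2), 1);                 % clip the ROI to the image
    roi(3)   = min(roi(3), size(gray,2) - roi(1) + 1);
    roi(4)   = min(roi(4), size(gray,1) - roi(2) + 1);
    pts = detectSURFFeatures(gray, 'ROI', roi);
    [descr, validPts] = extractFeatures(gray, pts);
    % descr holds the SURF descriptors restricted to the blob ROI
end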
The homography can be used to reshape the ROI so that the object is not lost during intensive UAV maneuvering. The correlation function for BLOB objects, W_blob, and the normalized cross correlation for it, E_NCC(W_blob), are also calculated.

For detecting linear reference features the Hough Line Transform [6] is used; it detects straight lines. Before applying the transform, edge detection pre-processing is desirable.

In machine learning there is a relationship between a set of input features (x1, x2, x3, ...) and produced outputs (y1, y2, y3, ...). This relationship is described by a target function, which a machine learning algorithm seeks to replicate. The actual estimate of this function produced by a machine learning algorithm is called a decision function; it is selected from a set of possible functions that map the input features to the produced outputs. The aim of a learning algorithm is to classify feature vectors that are not present in the training set; this ability is called generalization. It is common for machine learning algorithms to learn the training set correctly but produce nearly random predictions on test data not in the training set; this phenomenon is known as overfitting and leads to poor generalization performance. Commonly used machine learning algorithms are reviewed in [7].
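The ROI reshaping by homography can be sketched in MATLAB as follows; this is an illustrative use of SURF matching with estimateGeometricTransform from the Computer Vision System Toolbox, with hypothetical frame files and ROI values, not the paper's exact implementation.

% Sketch: reshape the ROI between frames with a homography estimated from
% matched SURF points (illustrative inputs; at least 4 good matches assumed).
grayPrev = rgb2gray(imread('frame001.png'));   % hypothetical previous frame
grayCurr = rgb2gray(imread('frame012.png'));   % hypothetical current frame
roiPrev  = [120 80 60 40];                     % hypothetical ROI [x y w h] from blob analysis

% Match SURF points between the two frames
[dPrev, vPrev] = extractFeatures(grayPrev, detectSURFFeatures(grayPrev));
[dCurr, vCurr] = extractFeatures(grayCurr, detectSURFFeatures(grayCurr));
pairs = matchFeatures(dPrev, dCurr);

% Robustly estimate the projective transform (homography) between the frames
tform = estimateGeometricTransform(vPrev(pairs(:,1)), vCurr(pairs(:,2)), 'projective');

% Map the corners of the previous ROI into the current frame and take their
% bounding box as the reshaped ROI, so the object is not lost while maneuvering
x = roiPrev(1); y = roiPrev(2); w = roiPrev(3); h = roiPrev(4);
corners = [x y; x+w y; x+w y+h; x y+h];
warped  = transformPointsForward(tform, corners);
roiCurr = [min(warped(:,1)), min(warped(:,2)), ...
           max(warped(:,1)) - min(warped(:,1)), ...
           max(warped(:,2)) - min(warped(:,2))];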

For the Hough transform, lines are expressed in the polar coordinate system. A line equation can then be written as

y = -\frac{\cos\theta}{\sin\theta} x + \frac{r}{\sin\theta},

or, rearranging the terms,

r = x \cos\theta + y \sin\theta.

In general, for each point (x_0, y_0) the family of lines that passes through that point is defined as

r_\theta = x_0 \cos\theta + y_0 \sin\theta,

which means that each pair (r_\theta, \theta) represents a line that passes through (x_0, y_0). The Hough Line Transform keeps track of the intersections between the curves of every point in the image. If the number of intersections is above some threshold, it is declared a line with the parameters (\theta, r) of the intersection point. The number of detected lines varies with the threshold: the higher the threshold, the fewer lines are detected, since more curve intersections are required to declare a line.

The correlation function for Hough transform objects, W_Hough, and its normalized cross correlation E_NCC(W_Hough) also have to be calculated:

E_{NCC}(W_{Hough}) = \frac{a_{reference} a_{current} + b_{reference} b_{current}}{s_{reference} s_{current}},

where s_reference, s_current are the standard deviations of the reference and current images, [a, b]_reference is the line vector of the reference image, and [a, b]_current is the line vector of the current image.

An algorithm for combining linear features was developed taking into account possible significant geometric distortions. A 3-dimensional target model is used as the template image, while the current image obtained from the sensor is 2-dimensional. It is assumed that both the template and the current image were processed earlier and are represented as combinations of contour lines. The algorithm includes procedures of rough and exact search; since the rough search is the original part of the algorithm, it is investigated here. The 3-dimensional template model includes all the lines visible to the sensor when the target is sighted from all potential angles. Every linear feature is described by the coordinates of one of the target points of the line, its length and its orientation. For correlation processing it is necessary to know some other parameters connected with each segment; these parameters characterize the uncertainty of the line position. The algorithm of rough search is presented in Fig. 1.

Steps 1 and 4 control the search over azimuth, elevation angle and scale. Step 2: since the set of target sightings is known a priori, the target coordinates in the obtained current image can be calculated, but it is necessary to take into account the directions of the sensors at the moment of image reception. Using the prior target coordinates as the origin of the transform, the contour features of the current image are transformed into Hough space by calculating their intervals and orientations.
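A minimal MATLAB sketch of detecting the most extended linear landmark with the Hough Line Transform is given below; the Canny edge detection, the peak threshold factor and the FillGap/MinLength values are illustrative, not the settings used in the experiments.

% Detect straight lines with the Hough transform and keep the longest one
% as the linear landmark (illustrative parameter values).
gray = rgb2gray(imread('frame001.png'));        % hypothetical frame
bw   = edge(gray, 'canny');                     % edge pre-processing before the transform

[H, theta, rho] = hough(bw);
peaks = houghpeaks(H, 10, 'Threshold', 0.3*max(H(:)));   % threshold controls how many lines survive
lines = houghlines(bw, theta, rho, peaks, 'FillGap', 10, 'MinLength', 30);

% Select the line segment with the greatest length as the landmark
bestLen = 0; bestLine = [];
for k = 1:numel(lines)
    len = norm(lines(k).point1 - lines(k).point2);
    if len > bestLen
        bestLen  = len;
        bestLine = lines(k);
    end
end
% bestLine.theta and bestLine.rho give the landmark parameters (theta, r)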

Fig. 1. The algorithm of rough search of azimuth, elevation and scale definition. The flowchart comprises an outer contour of search by azimuth and elevation and an inner contour of search by scale value, driven by the a priori position estimate and error statistics of the navigation filter: (1) search by azimuth and elevation; (2) current model transform — calculation of target coordinates and transform into Hough space using the target as origin, with definition of the criteria correlation functions W_surf, W_Hough, W_blob and their normalized correlation coefficients; (3) reference model transform — transform of the 3-dimensional model into a 2-dimensional one and transform into Hough space, with definition of the same criteria correlation functions, followed by correction against the current and reference images; (4) search of scale value — scale selection in the search region and modification of the reference model in accordance with the chosen scale hypothesis; (5) calculation of the correlation value for the current azimuth, elevation and scale values; (6) selection of the best position, which is passed to the algorithm of exact search.
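To make the structure of the rough search explicit, the following MATLAB skeleton loops over azimuth, elevation and scale hypotheses; transformCurrentModel, transformReferenceModel and correlationValue are hypothetical placeholders for the operations of blocks 2, 3 and 5 of Fig. 1, and currentImage and model3D are hypothetical inputs — none of these are toolbox functions or code from the paper.

% Hypothetical skeleton of the rough search over azimuth, elevation and scale.
azimuths   = 0:5:355;          % illustrative search grids
elevations = 10:5:80;
scales     = 0.8:0.05:1.2;

best.value = -Inf;
for az = azimuths
    for el = elevations
        curHough = transformCurrentModel(currentImage, az, el);      % block 2 (placeholder)
        for s = scales
            refHough = transformReferenceModel(model3D, az, el, s);  % blocks 3-4 (placeholder)
            W = correlationValue(curHough, refHough);                % block 5 (placeholder)
            if W > best.value                                        % block 6: keep the best hypothesis
                best = struct('value', W, 'azimuth', az, ...
                              'elevation', el, 'scale', s);
            end
        end
    end
end
% best.azimuth, best.elevation and best.scale are passed to the exact search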

Step 3: the 2-dimensional template model is transformed into Hough space in the same way as the current image in step 2. Step 4: since the value of the scale was assumed to be given, the template or current model is corrected to the corresponding scale.

IV. RESULTS

To study the proposed algorithm, a video sequence recorded aboard a UAV was used. The flight passed above several recognizable objects such as a runway, ground marks and vehicles, with a clear background (a green field) and foreground objects. The sequence was processed in the MATLAB 2014a environment [8].

The results of blob detection are visualized in Fig. 1 (blob segmentation and its colored visualization). An example of the object characteristics is presented in Fig. 2, which shows the MATLAB workspace for the variable stats containing three fields by default: Area, Centroid, BoundingBox.

For the frame previously processed by blob selection, the feature points were found and those inside the ROI (taken two times larger than the BoundingBox from the variable stats) were detected. The visualization of SURF detection is presented in Fig. 3: the arrow-like object is identified with the help of the blob and then the SURF points are found. An example of a SURF point descriptor is presented in Fig. 4, which shows the MATLAB workspace for the variable Ipts1 (interest points) with the fields: centroid point (x, y), scale (Gaussian sigma), Laplacian (0 or 1), orientation (in radians) and descriptor (8x8 reshaped into 64x1).

An example of the best matching SURF points in the ROI determined from blob analysis is presented in Fig. 5 (matching of the 1st and 12th frames). Matching was done using the method [8] proposed by the authors: the normalized correlation coefficient (NCC) is obtained by multiplication of the descriptor matrices of the two images, and the best matches are found by checking whether the NCC exceeds a threshold.
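The descriptor matching step can be sketched in MATLAB as follows; descriptors are unit-normalized so that their inner product is the NCC, and the threshold of 0.8 and the frame files are illustrative values rather than those used by the authors.

% Match SURF descriptors of two frames by normalized correlation of
% descriptor matrices (rows = descriptors). Illustrative threshold and inputs.
gray1 = rgb2gray(imread('frame001.png'));   % hypothetical 1st frame
gray2 = rgb2gray(imread('frame012.png'));   % hypothetical 12th frame

[d1, v1] = extractFeatures(gray1, detectSURFFeatures(gray1));
[d2, v2] = extractFeatures(gray2, detectSURFFeatures(gray2));

% Normalize descriptors to unit length so that the inner product is the NCC
n1 = d1 ./ repmat(sqrt(sum(d1.^2, 2)), 1, size(d1, 2));
n2 = d2 ./ repmat(sqrt(sum(d2.^2, 2)), 1, size(d2, 2));

E = n1 * n2';                         % NCC matrix of all descriptor pairs
[bestNCC, bestIdx] = max(E, [], 2);   % best candidate in frame 2 for each point of frame 1
keep = bestNCC > 0.8;                 % keep only matches above the NCC threshold

matched1 = v1(keep);
matched2 = v2(bestIdx(keep));
% matched1(i) in frame 1 corresponds to matched2(i) in frame 12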

A search algorithm for the most extended landmark, by which the unmanned aerial vehicle can be guided and flight correction implemented, was proposed. With the help of the Hough line transform it is possible to identify lines in both images and video. Roads and the horizon line were detected by the Hough line transform in the experimental images and video. The threshold value was varied while working with the images to obtain the best result of landmark detection (Table I). A visualization of the detection of the object with the greatest length, used as a linear landmark that allows the unmanned aerial vehicle location to be estimated, was obtained (Fig. 6: determination of the main road; the horizon line is detected as a landmark). The visual analysis of the effectiveness of this algorithm for inertial navigation system correction showed that the algorithmic software is suitable for use aboard an unmanned aerial vehicle and, owing to the computer vision system, gives location results that are as accurate as possible.

TABLE I. CHANGES OF THE THRESHOLD PARAMETER
Frame | Threshold | Quantity of detected lines | Quantity of unique lines

V. CONCLUSIONS

The results showed that using this technique for UAV orientation evaluation allows a number of high-precision tasks to be solved, such as accurate positioning and following along a landmark, targeting, automatic landing and others. For the proposed algorithm combining the Blob and SURF methods, object detection is done by estimating feature points in the ROI of a blob object with specified characteristics such as area, perimeter, orientation, etc. Accuracy in predicting the location of a previously detected object is provided by context-dependent classification. The results of experimental research show the high reliability of this approach and sufficient accuracy for the estimation of geometric transforms.

REFERENCES
[1] Mukhina M. P., Barkulova I. V. Algorithm of variative feature detection and prediction in context-dependent recognition // Electronics & Control Systems.
[2] Mukhina M. P., Barkulova I. V. Structure of aided classification of ground objects by video observation // Electronics & Control Systems, 2017.
[3] Bay H., Tuytelaars T., Van Gool L. SURF: Speeded up robust features // Proceedings of the 9th European Conference on Computer Vision, 2006.
[4] Briechle K., Hanebeck U. D. Template matching using fast normalized cross correlation // Optical Pattern Recognition XII, International Society for Optics and Photonics.
[5] Baklickij V. K., Bochkarev A. M., Musyakov M. P. Metody filtracii signalov v korrelyacionno-ekstremalnyh sistemah navigacii [Methods of signal filtering in correlation-extreme navigation systems]. Moscow: Radio i svyaz'.
[6] Mukhina M. P., Tkachenko O. Y., Barkulova I. V. Accuracy research method of the modified algorithm for detecting linear landmarks // Electronics & Control Systems.
[7] Hughes M. A framework for automated landmark recognition in community contributed image corpora: PhD thesis, Dublin City University.
[8] Mukhina M. P., Barkulova I. V. Algorithm of ground object detection with modified SURF method by using morphological features // IEEE 5th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC), October 16-18, 2018, Kyiv.

Maryna Mukhina is currently working as a professor at the Department of Aviation Computer-Integrated Complexes of the National Aviation University, Kiev, Ukraine.
She received the MSc Degree in Electrical Engineering, the Candidate of Sciences (Engineering) Degree (2005) and the Doctor of Sciences (Engineering) Degree (2016) at the National Aviation University. Research interests include correlation-extreme aided navigation systems, data fusion algorithms and image processing. Author of 40 scientific papers, textbooks, articles and proceedings.

Iryna Barkulova is a post-graduate student at the Department of Aviation Computer-Integrated Complexes of the National Aviation University, Kiev, Ukraine. She received the Master Degree in Computer-Integrated Technologies at the National Aviation University. Research interests include correlation-extreme aided navigation systems, data fusion algorithms and image processing. Author of 15 scientific articles.

Artem Prymak is a post-graduate student at the Department of Aviation Computer-Integrated Complexes of the National Aviation University, Kiev, Ukraine. He received the Master Degree in Computer-Integrated Technologies at the National Aviation University. Research interests include machine learning, data fusion algorithms, adaptive control systems, big data processing algorithms and computer vision. Author of 5 scientific articles.
